The potential of AI rests on two things: machine learning and a set of ethical principles
As machine learning technology advances, data grows richer, and processing power increases, AI’s potential is becoming more evident.
However, for AI to become truly powerful it needs to be reliable, and that will require standards for how machine learning models are built and assured.
Although some people may fear that AI and robots will take over the world, the technology is still in its early stages of development.
Furthermore, the business ecosystem surrounding AI has fundamental issues that need to be addressed before it can fully realize its potential.
Despite these challenges, two promising developments give me hope for the future of AI, and could allay concerns about the rise of the machines.
Functionality, fairness and faith
To achieve the significant transformation that AI and machine learning (ML) promise, we must have faith in the output they generate. However, establishing trust has not been an easy task thus far.
For example, the healthcare sector, particularly the overstretched National Health Service (NHS), faces significant challenges in deploying AI to ease the burden on clinicians. Given the stakes, there must be complete confidence that the recommendations these systems produce are at least as accurate as those of a human clinician.
Fortunately, the landscape is changing, and we are making progress in establishing an effective assurance ecosystem.
The UK Government’s report from the Centre for Data Ethics and Innovation, published last December, has paved the way for the development of a formal stamp of approval for innovative and safe ML models that are fit for purpose and fair. This progress will be followed by the release of a White Paper and ISO standards, with industry-focused regulators working with businesses and data scientists to establish this assurance ecosystem.
Putting machine learning into operation
Many organizations that would benefit from AI are the least prepared to implement it. A new business can build itself to be data-driven from day one, but companies with a long history often have siloed data and IT estates built on legacy systems and workarounds. The financial services sector, for example, is weighed down by technical debt.
This presents an immediate obstacle to advanced analytics that can unlock profound insights into areas such as risk and customer retention, which are critical to banks and insurers. Even companies with mature data frameworks may struggle to implement meaningful AI. The challenge can be just as much cultural as it is technical.
Machine learning models are created by data scientists, who don’t necessarily have a background in enterprise IT. They develop these models using specialized tools and programming languages based on business requirements and test them to ensure they generate valuable results. However, what happens next?
It’s unrealistic to expect a typical IT department to know how to support these specialized tools or integrate the predictive models into regular workflows like an online customer journey. Data scientists cannot be expected to become expert system integrators either. However, it may be possible to provide them with a platform that streamlines the path from predictive models to production code.
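To make the handoff concrete, here is a minimal sketch of that path from model to production code, using only the Python standard library. The `ChurnScorer` class, its hard-coded weights, and the file name are all hypothetical stand-ins for a real trained model; the point is the division of labour, where the data scientist exports a self-contained artifact and IT serves it through a thin wrapper without needing the original tooling.

```python
import pickle

# Hypothetical example: a tiny churn-risk scorer standing in for a real
# data-science model. In practice this would be trained with specialised
# tools; here the weights are hard-coded for illustration.
class ChurnScorer:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def predict(self, features):
        # Simple linear score mapped to a yes/no decision.
        score = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return 1 if score > 0 else 0

# Step 1 (data science side): export the trained model as an artifact.
model = ChurnScorer(weights=[0.8, -0.5], bias=-0.1)
with open("churn_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Step 2 (IT side): a thin serving wrapper loads the artifact and exposes
# a plain function that any workflow (e.g. an online customer journey)
# can call, with no knowledge of how the model was built.
def serve_prediction(path, features):
    with open(path, "rb") as f:
        loaded = pickle.load(f)
    return loaded.predict(features)

print(serve_prediction("churn_model.pkl", [1.0, 0.2]))  # prints 1 (high risk)
print(serve_prediction("churn_model.pkl", [0.0, 1.0]))  # prints 0 (low risk)
```

A real platform would add versioning, monitoring, and an API layer around this pattern, but the core idea is the same: the model travels as an artifact with a stable interface, so neither side has to become an expert in the other's stack.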