As I have made my way through our dynamic, technology-driven world, I have developed a genuine affinity for Artificial Intelligence (AI). In this article, I intend to share my insights on AI, distinguish it from its most prominent subfields – Machine Learning (ML) and Deep Learning (DL) – and explain why AI is not just a technological marvel but a significant business asset. Lastly, I will touch on common fears about AI and advocate for a more informed, less apprehensive approach.
Defining AI
My Journey with AI
Understanding the basis of AI has taken me on a journey to learn what AI really means. At its core, AI is the science of creating machines that can perform tasks that would otherwise require human intelligence. It encompasses capabilities such as reasoning, learning, problem-solving, perception, and language understanding.
When I attended the Executive Programme “Managing Artificial Intelligence” at LUISS Business School, I learned an interesting definition of Artificial Intelligence, namely the one provided by the Oxford Dictionary: “Software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data”.
Given that John McCarthy coined the term ‘Artificial Intelligence’, I believe it’s fair to recall his own definition of AI: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”
AI in Context
AI has deep roots, tracing back to the intelligent automata of myth in ancient civilizations. The modern concept of AI, however, took shape in the 20th century, when pioneers like Alan Turing (London 1912 – Cheshire 1954) laid the groundwork by proposing the idea of machines that could simulate human intelligence.
Turing’s most notable contribution is the concept of the Turing Machine, a hypothetical device that manipulates symbols on a strip of tape according to a set of rules. This concept, foundational in computer science, illustrated the principles of algorithmic computation and laid the groundwork for modern computers. Another significant contribution is the Turing Test, a criterion of intelligence in a computer, requiring that a human should be unable to distinguish the machine from another human being based solely on the conversation. This concept continues to influence AI development and philosophical discussions about machine intelligence.
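To make the mechanism concrete, here is a minimal Python sketch of a Turing machine; the rule table and the bit-flipping task are my own toy illustration, not drawn from Turing’s papers.

```python
# A toy Turing machine: a tape of symbols, a read/write head, and a rule table.
# This illustrative program flips every bit on the tape, then halts on the blank.
def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]  # look up (state, symbol) -> action
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: (current state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", rules))  # prints 1001_
```

Everything a modern computer does can, in principle, be reduced to this simple loop of reading, writing, and moving according to rules – which is exactly why the concept proved so foundational.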
During World War II, Turing played a pivotal role in breaking German ciphers, significantly contributing to the Allied victory. His work in cryptanalysis demonstrated practical applications of theoretical concepts, blending mathematics, logic, and computation.
Turing’s theories remain central to AI and computing. The Turing Award, often regarded as the “Nobel Prize of Computing,” honors individuals for significant contributions to the field, reflecting Turing’s enduring legacy. His ideas continue to inspire and guide contemporary AI research, underlining his status as a founding father of the field. Alan Turing’s visionary work in the mid-20th century laid the cornerstones for what would become AI, and his influence persists not just in the technologies we use today but in the very way we conceptualize and approach computational problems. His life and work remain a testament to the profound impact one individual can have on the future of technology.
“The Imitation Game” (2014) is a fine film about Turing’s life and work, based on the 1983 biography “Alan Turing: The Enigma” by Andrew Hodges. The film captures well the spirit in which Turing approached his personal challenges.
John McCarthy (Boston 1927 – Stanford 2011), often recognized as the father of AI, coined the term “Artificial Intelligence” in 1955. Unlike Turing, who laid the theoretical groundwork, McCarthy was instrumental in establishing AI as a distinct academic discipline. While Turing conceptualized machine intelligence and computational principles, McCarthy focused on creating programming languages (like Lisp) and environments conducive to AI research. This practical approach complemented Turing’s theoretical models, moving AI from abstract concepts to tangible experimentation.
Marvin Minsky (New York 1927 – Boston 2016), a contemporary of McCarthy, significantly influenced the field of AI through his work on neural networks and cognitive models. Turing envisioned machines that could mimic general human intelligence. In contrast, Minsky concentrated on specific aspects of intelligence, such as learning and perception, developing the concept of “frames” as data structures for representing stereotyped situations. This approach marked a departure from Turing’s broader perspective, focusing on specialized AI systems.
Dipping into the Subfields of AI
Machine Learning: Beyond Algorithms
What I’ve learnt is that Machine Learning is a subset of AI. It focuses on developing algorithms that enable machines to learn and make decisions based on data. Whereas traditional programming requires us to spell out precise instructions, an ML algorithm makes decisions by learning patterns from data, with minimal human intervention.
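To make the contrast concrete, here is a minimal sketch (assuming scikit-learn is installed, and using invented numbers) where the relationship is learned from example data rather than coded as explicit rules:

```python
# A minimal sketch of learning from data, assuming scikit-learn is installed.
# The numbers are invented for illustration only.
from sklearn.linear_model import LinearRegression

# Toy data: advertising spend (in thousands) vs. units sold.
X = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y = [12.0, 19.0, 31.0, 42.0, 50.0]

model = LinearRegression()
model.fit(X, y)                 # the pattern is learned from the data,
print(model.predict([[6.0]]))   # not written down as explicit if/then rules
```

Nobody told the program that spend and sales are roughly proportional; it inferred that from the examples, which is the essence of ML.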
Deep Learning: Subset of a Subset
Deep Learning has caught my interest for its ability to process and interpret large sets of data. DL uses layered neural networks to make sense of vast amounts of unstructured information, which makes it especially suitable for complex tasks such as image and speech recognition.
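As a small illustration of what stacked layers of neurons look like in code, here is a minimal sketch assuming TensorFlow/Keras is installed; the random data merely stands in for real images or audio:

```python
# A minimal deep network sketch, assuming TensorFlow/Keras is installed.
# Random toy data stands in for real unstructured inputs such as images.
import numpy as np
from tensorflow import keras

X = np.random.rand(200, 8)               # 200 samples, 8 features each
y = (X.sum(axis=1) > 4.0).astype(int)    # a simple pattern for the net to learn

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),    # first hidden layer
    keras.layers.Dense(16, activation="relu"),    # stacking layers is what makes it "deep"
    keras.layers.Dense(1, activation="sigmoid"),  # output: probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))            # [loss, accuracy]
```

Real systems for image or speech recognition use the same principle, just with far more layers and far more data.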
AI in the Business Sphere
AI as a Business Ally
Throughout my professional experience, I have watched businesses undergo an AI transformation: optimizing operations, improving customer experiences, and innovating in ways never seen before. AI-powered analytics make it possible to predict market trends, chatbots automate customer service, and intelligent systems manage supply chains efficiently.
There are several examples of companies and start-ups that have invested in AI to develop solutions bringing significant improvements in productivity or quality of outcomes. I like to emphasise that, beyond the financial return on investment, these benefits can also give a significant boost to sustainable development and to health care.
Tangible Business Wins
AI’s successful adoption by organizations of every size, from small startups to large multinationals, is proof of its potential to transform business practice. It delivers cost savings and efficiency while giving companies a competitive advantage. Here are some cases where the use of AI has brought (or is already bringing) significant benefits:
- MeVitae (London, United Kingdom) is a startup, founded in 2014, that uses companies’ human-resources data to highlight imbalances in recruitment and promotion processes. Its tools integrate with HR software, providing analysis of where candidates are being excluded or which employees are not advancing in their careers. Using Artificial Intelligence, it suggests interventions, such as anonymized recruitment, to correct these disparities. The tools not only spot disparities in applicant success rates but also suggest ways to address them. MeVitae’s DE&I analytics tool then lets organizations measure the effectiveness of these efforts, helping to shape more informed DE&I strategies.
With a wide range of clients across finance, government, and technology, MeVitae’s solutions have shown real results. Clients have seen a notable 30% increase in gender and ethnic diversity within their organizations, highlighting the impact of MeVitae’s approach in fostering positive change.
- Deep Genomics (Toronto, Canada) has built an AI-driven platform that speeds up research by enabling healthcare professionals to rapidly identify candidates for drug development targeting specific disorders. The company leverages machine learning to pinpoint and track potential causes of genetic diseases: its system can determine how a person’s genetic mutation alters their DNA and leads to disease. Even more impressively, the machine-learning solution can analyze millions of potential medications in just a few hours, guiding researchers towards the ones most likely to be effective.
- JP Morgan Chase recognizes how crucial artificial intelligence is in transforming the financial landscape and has made significant investments in AI to leverage its advantages. The company has integrated AI across various functions, such as fraud detection, risk management, trading algorithms, and customer-support automation.
These AI-driven solutions help streamline operations, enhance decision-making, and offer personalized services to clients. Grasping the impact of AI on JP Morgan Chase’s efficiency and decision-making is key to understanding AI’s disruptive power in the financial services industry. In the past, the company’s error rate for commercial payments fraud was about 3%; with the analysis performed by COiN (“Contract Intelligence”, JP Morgan Chase’s AI system), it has dropped to around 1%, significantly reducing fraud losses. (Source: International Journal of Scientific Research & Engineering Trends – Volume 10, Issue 1, Jan-Feb 2024, ISSN (Online): 2395-566X)
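COiN itself is proprietary, so purely to illustrate the general technique, here is a hypothetical anomaly-detection sketch on synthetic data (using scikit-learn) of how unusual payments might be flagged for review:

```python
# Hypothetical sketch of anomaly-based fraud screening on synthetic data.
# This is NOT JP Morgan Chase's COiN system, just the general technique.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100.0, scale=20.0, size=(1000, 2))  # typical payments
odd    = rng.normal(loc=400.0, scale=50.0, size=(10, 2))    # unusual payments
payments = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(payments)   # -1 marks suspected anomalies
print((flags == -1).sum(), "payments flagged for human review")
```

In a real deployment the features would be far richer (amount, merchant, timing, account history), and flagged payments would go to a human review queue rather than being rejected automatically.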
JP Morgan Chase has also made significant investments in advanced AI-driven risk management tools. One standout example is the Katana Lens platform, which uses state-of-the-art technology to give the company a strategic edge in proactively managing risks. Identifying risk events before they escalated produced a 42% drop in losses from those issues, saving more than $500 million in 2023 alone. (Source: International Journal of Scientific Research & Engineering Trends – Volume 10, Issue 1, Jan-Feb 2024, ISSN (Online): 2395-566X)
Demystifying the Fear Factor in AI
The truth is that, despite all its benefits, AI has been met with doubt and fear. Concerns about job displacement, invasion of privacy, and out-of-control AI are widespread. In my understanding, however, AI is a means of augmenting human beings, not replacing them. The ethical values that should govern the development and operation of AI include transparency, accountability, and the protection of privacy.
Conclusion
My journey of understanding and applying AI has been (and will continue to be) an enlightening one. AI, ML, and DL, each with its unique capabilities, are not just technological tools but catalysts for innovative business growth. Informed enthusiasm, rather than fear and denial, will open the doors to a future where technology and humanity co-exist and thrive together.