In the 1950s, a generation of scientists, mathematicians, and philosophers was fascinated by the concept of artificial intelligence (AI). Humans use available information to solve problems and make decisions, so how could machines use that same learning process to become more intelligent on their own? Over the years, AI has grown more advanced, is becoming more widely implemented in business processes, and is now available to the mass consumer. What are the driving forces behind this new wave of technology, and how could they be used in the future?
Early Artificial Intelligence
The earliest computers faced a problem: they could be told what to do but could not recall actions or store commands. Computers were also extremely expensive; in the early 1950s, leasing a computer could cost up to $200,000 a month. Advocacy for artificial intelligence was necessary to convince high-profile investors that machine intelligence would be worth the cost.
The Dartmouth Summer Research Project of 1956 is credited with establishing AI as a research discipline and coining the term "artificial intelligence." Top researchers from various fields gathered for open-ended discussions on artificial intelligence, which helped catalyze the next twenty years of AI research.
From 1957 to 1974, computers became faster, more advanced, and able to store more information. Machine learning (ML) branched off from AI in the late 1970s as its own research subdivision, in which computers learn and adapt without following explicit commands. Before this period, ML principles were lumped in with general AI, but as algorithms improved, machine learning flourished alongside AI.
During this period, the largest obstacle to AI was a lack of computational power. Computers couldn't store enough data or process it quickly enough, creating barriers to AI progress. Computer costs were decreasing, but machines still weren't powerful enough to accommodate the growing sector.
The AI sector was reignited in the 1980s as computational power increased. A boost of $400 million in funding from 1982 to 1990 helped propel research efforts. In February 1989, Naomi Freundlich wrote about "brain-style computers" and her experience at Columbia University with a computer that taught itself to pronounce English text overnight. Even as government funding and public hype faded, AI continued to grow. In 1997, IBM's Deep Blue, a chess-playing computer, beat reigning world chess champion and grandmaster Garry Kasparov, a huge step toward public adoption.
Artificial Intelligence Today and the Future
Today, we're living in the age of "big data." Computational power has surpassed our current needs, Web 3.0 has been growing in popularity, and AI has seen widespread adoption for everyday use. From 2011 to today, speech recognition, robotic process automation, smart homes, and other everyday applications have brought AI into our homes, businesses, and pockets. Nearly half of all businesses use data analysis, machine learning, or AI tools to address data quality issues, according to a 2020 survey by O'Reilly. Venture funding has caught up with the emerging technology: the AI market size is projected to reach $86.9 billion in 2022.
It may be hard to predict what artificial intelligence will look like in the future, but Forbes outlined five predictions for how AI could be used in the next five to ten years. First, AI and ML could transform the scientific method, using computers to explore a broader set of ideas than a human brain could computationally consider. AI could also become a pillar of foreign policy, as AI innovation could improve economic resilience and geopolitical leadership in the U.S. AI could enable the next generation of consumer experiences: the metaverse and cryptocurrency, both critically enabled by AI, could transform the way people consume content. AI could also be critical to addressing the climate crisis, with prediction markets that show the impact of environmental policies. Finally, AI could enable truly personalized medicine, allowing patients to receive individually synthesized therapies for different diseases and conditions. AI holds many possibilities, but it still has a long way to go before it widely impacts these critical facets of our economy and society.
Putting it All Together
AI looks drastically different from when the term was first coined in 1956. Just as computers have evolved over the decades, artificial intelligence is expected to evolve as widespread adoption occurs and venture funding continues. While we can't be certain what AI will look like in the future, there are many promising use cases for AI across a variety of sectors and industries.
The information presented here is for general informational purposes only and is not intended to be, nor should it be construed or used as, comprehensive offering documentation for any security, investment, tax or legal advice, a recommendation, or an offer to sell, or a solicitation of an offer to buy, an interest, directly or indirectly, in any company. Investing in both early-stage and later-stage companies carries a high degree of risk. A loss of an investor’s entire investment is possible, and no profit may be realized. Investors should be aware that these types of investments are illiquid and should anticipate holding until an exit occurs.