This blog is based on an excerpt from the 2023 Platform Strategies opening keynote, which may be viewed in full here.

It's no secret that AI still falls short of seasoned human experts in many areas, even as it rapidly surpasses average human performance on a wide range of standardized tests. These weaknesses are worth acknowledging, but they shouldn't overshadow the progress being made: as these systems continue to scale up, their weaknesses are likely to diminish, and their emerging capabilities are already challenging our existing theories of intelligence.

To illustrate this point, consider how traditional chess engines from the '70s, '80s, and '90s worked. They combined handcrafted algorithms and human-generated heuristics with the raw power of computers to search possible moves and out-strategize human players. This approach succeeded at chess, as Deep Blue demonstrated by defeating Kasparov in 1997, but it fell short with the ancient, complex game of Go, which has more possible board positions than there are atoms in the observable universe.
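
As a toy sketch of that classical recipe (tic-tac-toe standing in for chess; nothing here is drawn from Deep Blue's actual implementation), a handcrafted evaluation heuristic guides a depth-limited minimax search:

```python
# Classical recipe: a handcrafted heuristic plus depth-limited minimax search.
# Toy illustration only; real engines used far richer heuristics and pruning.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board):
    """Handcrafted heuristic: lines still open to X minus lines still open to O."""
    score = 0
    for a, b, c in LINES:
        cells = {board[a], board[b], board[c]}
        if "O" not in cells:
            score += 1   # X can still complete this line
        if "X" not in cells:
            score -= 1   # O can still complete this line
    return score

def minimax(board, depth, x_to_move):
    """Depth-limited game-tree search, scoring horizon nodes with the heuristic."""
    w = winner(board)
    if w is not None:
        return 100 if w == "X" else -100
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if depth == 0 or not moves:
        return evaluate(board)
    mark = "X" if x_to_move else "O"
    scores = [minimax(board[:i] + mark + board[i + 1:], depth - 1, not x_to_move)
              for i in moves]
    return max(scores) if x_to_move else min(scores)

print(minimax("X.O.X....", 3, False))   # score a sample position with O to move
```

All of the strategic judgment lives in `evaluate`: the machine supplies speed, while a human supplies the game knowledge.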

This complexity made Go a benchmark for machine learning: could a system use neural networks to learn the game? AlphaGo did just that, absorbing the sum total of games recorded by master players and eventually surpassing every human player. The next step was AlphaGo Zero, a system designed to learn without any prior human knowledge: given nothing but the rules, it trained purely by playing against itself. Within 72 hours it surpassed human-level performance, and within 40 days it became the world's strongest Go player, other AI systems included. Its successor, AlphaZero, generalized the approach to other two-player board games; applied to chess, it mastered the game in just four hours and outperformed the reigning engine champion, Stockfish.
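
To make the contrast concrete, here is a minimal, runnable sketch of learning from self-play alone, in the spirit of AlphaGo Zero but vastly simplified: a tabular value function and tic-tac-toe stand in for deep networks, Monte Carlo tree search, and Go.

```python
# Minimal self-play learning sketch (AlphaGo Zero in spirit, not in scale):
# a value table is learned purely from games the program plays against itself.

import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}   # state -> estimated value for X, learned from self-play alone

def choose(board, mark, epsilon=0.1):
    """Pick a move: mostly greedy on learned values, occasionally exploratory."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < epsilon:
        return random.choice(moves)
    def score(i):
        v = values.get(board[:i] + mark + board[i + 1:], 0.0)
        return v if mark == "X" else -v   # O prefers states that are bad for X
    return max(moves, key=score)

def self_play_episode(lr=0.2):
    """Play one game against itself, then pull visited states toward the outcome."""
    board, mark, history = "." * 9, "X", []
    while winner(board) is None and "." in board:
        i = choose(board, mark)
        board = board[:i] + mark + board[i + 1:]
        history.append(board)
        mark = "O" if mark == "X" else "X"
    w = winner(board)
    z = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
    for state in history:
        v = values.get(state, 0.0)
        values[state] = v + lr * (z - v)

for _ in range(20_000):
    self_play_episode()
print(f"value estimates learned for {len(values)} positions")
```

Unlike the heuristic in the previous sketch, nothing about strategy is hard-coded here; every value estimate is distilled from self-play, which is the essence of the AlphaGo Zero result.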

These self-taught programs aren't limited by the conventional wisdom of human players. To me, they represent a force as significant as fire, electricity, or the internet. Just as the internet drove the marginal cost of transmitting information to effectively zero, AI is now driving down the cost of cognition.

As we usher in a new era of generative AI, it seems we're on the brink of a period of hyper-acceleration. I believe this is driven by three factors that align with Kurzweil's Law of Accelerating Returns. 

Firstly, algorithmic efficiency has risen exponentially for years, with the compute required to reach a given level of performance falling steadily. Today's models still operate at a fraction of what is theoretically achievable, suggesting they will keep getting more efficient.

Secondly, the progress in hardware is staggering. NVIDIA, a leading AI hardware provider, reported a 1,000X improvement in computational power for AI applications over the last five years and confidently predicts a similar gain in the next five, which compounds to a million-fold improvement in a decade.
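
The arithmetic behind that projection is simple compounding: two consecutive 1,000X five-year gains multiply rather than add.

```python
# Two consecutive 1,000x five-year gains compound multiplicatively.
gain_per_five_years = 1_000
print(gain_per_five_years ** 2)   # 1000000 -> a million-fold over the decade
```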

Lastly, embedding models within larger systems amplifies their expressed intelligence and impact. Agents can reflect a model's output back on itself, chain models together, maintain memory, execute generated code dynamically, and act directly on the outside world. This mirrors the iterative process of human thought, which involves reflection, evaluation, feedback, validation, comparison to historical trends, and more.
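
As an illustrative sketch of such a loop (the `llm` below is a scripted stand-in so the example runs end to end, and the `action: argument` format is invented for the sketch; no real model API or agent framework is implied), an agent wraps a model with memory, reflection, and tool execution:

```python
# Illustrative agent loop: a model wrapped with memory, reflection, and tools.
# The "llm" is a scripted stand-in; the action format is invented for the sketch.

def calculator(expression: str) -> str:
    """A tool the agent can invoke to act beyond pure text generation."""
    return str(eval(expression, {"__builtins__": {}}))   # demo only; unsafe in general

TOOLS = {"calculator": calculator}

def llm(prompt: str) -> str:
    """Scripted stand-in for a language model so the sketch runs end to end."""
    if "Observation:" in prompt:
        result = prompt.rsplit("Observation: ", 1)[1]
        return f"finish: the answer is {result}"
    return "calculator: 12345 * 6789"

def run_agent(goal: str, max_steps: int = 5):
    memory = [f"Goal: {goal}"]                    # persistent working memory
    for _ in range(max_steps):
        thought = llm("\n".join(memory))          # the model reflects on its history
        action, _, arg = thought.partition(": ")  # invented "action: argument" format
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)          # act on the world through a tool
        memory += [f"Thought: {thought}", f"Observation: {observation}"]
    return None

print(run_agent("What is 12345 * 6789?"))
```

Swap the scripted stand-in for a real model and the same loop reflects, remembers, and acts, which is why systems built around models can express more intelligence than the bare model alone.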

The acceleration we're now experiencing means that even these reflections will no doubt seem dated in mere months, so we look forward to continuing the conversation and exploration with you.
