The discussion over whether artificial intelligence (AI) will replace or assist humans is not new, but it remains highly nuanced. Daniel Hook, CEO of Digital Science, spoke about the preconceived biases, history, and advancement of AI at the Platform Strategies event in September 2019. (View the full recording of his talk here.)

Daniel Hook

As AI progresses, most notably in China with its facial recognition advancements, we are left to wonder whether this technology is capable of creative, self-sustained thinking, or whether it is inherently driven by its creators and the way it is programmed. In his talk, however, Hook notes that there is a flipside: intelligent augmentation (IA), the concept of using this technology to coexist with and assist humans rather than replace them. Can humans work harmoniously with these technologies, resulting in another Digital Revolution? Or will AI beat us in a bigger game of proverbial chess and secure its own primacy?

Hook outlines the long history of AI and its sudden refocus: “We saw the first chess computers probably in the '50s and '60s. We saw them become mainstream in the '70s and '80s. And by 1997, Garry Kasparov was beaten by a chess computer. However, really interestingly, Garry Kasparov was not beaten comprehensively by a chess computer.” The team of software designers behind Deep Blue in the ‘90s didn’t program their computer simply to beat any chess player; they studied Kasparov’s techniques and built the computer to predict his moves and, in turn, outplay him at his own game. This underscores that, in its early days, AI was only what it was because of the humans and the hefty research behind its programming.

Fast forward to recent times, where the focus has shifted to algorithmic prediction: “What happens in 2016 is that the algorithms, the computer power and the amount of data that we had [catches] up with the promise of the algorithms that we've had for 40 years. And that's really the sea change. [Much in] the same way that the Space Race started in the 1960s and the late 1950s, when the Russians launched their orbiting spacecraft around the world. If you look at Chinese investment into research in AI you see a complete sea change in 2016. The amount of research funding that is now going into AI and into innovation around the translation of AI is actually scary.”

Humans inherently have doubts surrounding self-thinking technology. Hook touches on our commitment to continuity with regard to these emerging technological advancements and published writings: “The fact that the paper hasn't really changed in 350 years shows you how-- not actually the people in this room, but the people we serve-- are so ensconced in the idea of the printed thing that they hold. Tangibility and physicality are tremendously powerful physical relationships that we [humans] have. And actually getting over those relationships are really challenging if you're a technologist and you want to move things forward. So, the way I think about where we are right now is that we are effectively tool providers. We are trying to get people to move from this state--where they are gradually using our tools and hopefully, augmenting their intelligence by tool usage--to the stage where we actually move into the next phase, and we start seeing tools move in another direction. We need to become more efficient. We need to broaden and diversify the things that we bring to people, and a lot of that can be done through augmentation.”

Thinking back on how we were taught to use tools growing up, Hook uses an analogy from his own childhood: “I remember when I was at school, I had to learn both [how to] use a calculator and [how to] do the calculation in my head. And if I couldn't do the calculation in my head, then I wasn't allowed to use the calculator. And so, it was only by understanding how the calculation worked that one got to play with the calculator. I think we, as a society, are a little bit in this place right now with AI. When you think about deep learning, and you start to understand deep learning technologies, the very structure of the deep learning ecosystem seems to be [one] that there is a black box into which one cannot look.” We’re reaching a tipping point, and it’s becoming evident that doubts about AI will persist until we, as a society, are able to understand its capacity to aid us. This isn’t an unfounded notion; it rests on trusting the researchers behind these developments and their ethical standings. As Hook says, “...this could be a very positive technology, but it's certainly one where having a blind thing that just completely believes the researcher and what the research is doing could be something where there are ethical [questions] that are coming into play where we don't completely see what's going on.”

While Hook notes his concerns about the ethics behind AI, he asks listeners to consider the inevitable advancements we are steadily heading towards. There is a balance to be struck, one where we work with this technology in unison, using it to further our research while it adapts and grows with us. “I'd invite you to think what the really game-changing tools are that are coming out right now, and that you can imagine coming out in the future. [These] are intelligent augmentation devices for us, rather than either replacements for us or things that we should be worried about.” For Hook, our advancement as a society is undeniably, and inextricably, tied to the advancement of AI and IA tools.

by Clio Angle, Freelance Writer
