When did artificial intelligence get so clever – and what can humans learn from machines?

Despite its topicality, AI is not new

The beginnings of “artificial intelligence” can be traced to a conference at Dartmouth College in 1956, whose attendees went on to lead AI research in the ensuing years. Initially, they predicted that they would produce a machine with human-level intelligence within a generation. As the years passed and millions of dollars in government investment were spent, it became clear that they had underestimated the complexity of the objective.

In the early 1970s, government investment was halted, leading to the first “AI winter”: a period of little investment in, or progress on, artificial intelligence. A brief resurgence came in the 1980s, but disappointing results halted investment once again at the end of that decade – the second “AI winter”.

True progress came in the early 21st century, when massive computing power and storage capacity became widely available at a reasonable cost. This “democratization of AI” gave anyone access to the technology required to develop and deploy AI models. From that point on, anyone could build AI algorithms, limited only by their knowledge and expertise…

To continue reading, please download the full whitepaper:

When did artificial intelligence get so clever – and what can humans learn from machines?

About the author

Gareth Cameron

Director of Marketing & Communications

I am a marketer who has spent the past decade leading marketing and strategic functions across several business sectors.