It is fairly obvious that computers and artificial intelligence will run much of our world tomorrow, because we are programming these machines today. Interestingly enough, before long these AI machines may well be programming themselves. How did we come so far so fast, you ask? Well, maybe you need to do a little research for yourself.

If this topic interests you, then boy do I have a great book for you to read. It is a book I own personally and read a long time ago, but it still holds validity today, and many of its predictions from that period, only two decades back though it seems like eons, still hold up. The book is an extension of a highly controversial and ahead-of-its-time MIT thesis by the same author.


This book is not for the non-intellectual; the author gets pretty deep into the details and philosophy of parallel computing. It was written well before massive Internet use, just as computer technology in Silicon Valley was really taking off. Indeed, it was one of the prime-mover books of its time.

This is why I have it in my library, and why I recommend it to anyone who is into artificial intelligence, computer hardware, future software, or where we go from here. Why, you ask? Because if the past is any indication of the future, things are going to get pretty interesting in the next decade. In fact, I hope you will consider this and educate yourself a little in the past, so you can understand how far we've come, how fast we've come, and where we go from here. Think on it.


“Mastering the Game of Go with Deep Neural Networks and Tree Search”


The history of AI has been marked by ambitious timelines for success followed by disappointments, so it was heartening news when a program developed by Google’s DeepMind group was able to defeat a champion-level Go player a full decade before such a feat was thought possible. Go had been viewed as the ultimate challenge for game-playing AI systems. But the researchers behind the program told reporters that the milestone’s significance goes beyond games: “Our hope is that one day [our methods] could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling.”

Personal Challenge 2016: Simple AI

If your run-of-the-mill programmer declared a New Year’s resolution to build a virtual personal assistant, it would not be news, but when the multibillionaire CEO of Facebook set himself that challenge for 2016, people took notice. Facebook has invested heavily in artificial-intelligence research, and Zuckerberg’s vision for a system “kind of like Jarvis in Iron Man” will build on the company’s recent advances in voice recognition. He hopes to control his home through simple commands and facial recognition so that, for example, friends and family can come and go without needing a key.

The Future of the Professions: How Technology Will Transform the Work of Human Experts


As expert systems become increasingly capable of doing things like providing medical and legal advice, drawing up building plans, and teaching students, the authors predict, these and other artificial-intelligence technologies will affect white-collar professions in the 21st century in much the same way blue-collar work was transformed by automation in the 20th century. In anticipation of these changes, they propose a fundamental rethinking of how expertise is produced and distributed in society.

“Can This Man Make AI More Human?”

Instead of feeding computers reams of data in the traditional approach to artificial intelligence, NYU researcher Gary Marcus is attempting to train them to behave more intelligently by closely following the way infants and adolescents pick up concepts. Tech Review’s AI correspondent Will Knight chronicles how Marcus’s startup Geometric Intelligence is developing systems that are more flexible than traditional deep-learning algorithms in complex environments.

“Human-Level Concept Learning through Probabilistic Program Induction”

The Turing test is usually viewed as a conversational challenge for AI systems, but researchers at NYU, the University of Toronto, and MIT report that a new algorithm based on probabilistic program induction can pass a visual Turing test by drawing the letters of the alphabet in a way that is indistinguishable from human writing. With their algorithm, the researchers have created a system that can learn from just a single example in a classification task, rather than the hundreds of examples machine-learning algorithms usually require.

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots

In his latest book, Pulitzer Prize–winning New York Times science writer John Markoff charts the rise of automation from the first industrial robots of the postwar era to the increasingly sophisticated machines ever more prevalent in our workplaces, public spaces, and homes. Markoff focuses particularly on the minds behind the machines at places like Google and Apple, exploring the dichotomy between those who seek to build robots to replace humans in certain tasks, like Andy Rubin, former head of robotics at Google, and those who aim to develop intelligent machines to augment human intelligence in day-to-day life, like Siri developer Tom Gruber.

“Our Fear of Artificial Intelligence”

Responding to ideas in Oxford philosopher Nick Bostrom’s 2014 book Superintelligence, writer Paul Ford looks at whether it’s reasonable to fear that runaway AI machines will become self-aware and act in their own interests. Some prominent members of the AI community argue that these anxieties are based on a fundamental misunderstanding of how close researchers are to achieving anything resembling sentient machines. But others argue that even if thinking machines are a long way off, researchers working toward that goal must anticipate problems and contain them if possible.

Open Letter on Autonomous Weapons

An open letter signed by more than 3,000 of the world’s top scientists and AI researchers calls for a ban on autonomous weapons that select and engage targets without human intervention and beyond meaningful human control. The letter writers acknowledge the potential advantages of removing humans from the front lines of war but argue that a “global AI arms race” in the coming decades would ultimately be bad for humanity.

“The Errors, Insights, and Lessons of Famous AI Predictions”

From the start, the AI field has been marked by a series of notable predictions about exactly when machines will exhibit something approaching human-level intelligence. This paper analyzes a few of the more famous predictions, beginning with the claim before AI’s founding conference at Dartmouth in 1956 that 10 scientists could make “a significant advance” toward simulated intelligence in just two months. The authors go on to break down the ideas in Ray Kurzweil’s 1999 book The Age of Spiritual Machines into dozens of testable predictions for the year 2009, calculating a success rate of around 50 percent.

Our Final Invention: Artificial Intelligence and the End of the Human Era

This book by a longtime chronicler of AI research asks whether self-aware machines will be as benevolent as their engineers intend them to be. Noting that computer intelligence will inevitably be unpredictable and inscrutable to humans, Barrat argues, “We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans.”