Key Takeaways from GTC2017 – The Brave New World of Artificial Intelligence

Artificial Intelligence
May 24, 2017

After earlier false starts and a lack of significant progress, Artificial Intelligence (AI) has earned skepticism from those who have worked in technology for more than a decade.

GTC2017 marked a shift in how the software and technology community views artificial intelligence.

At this year’s event, NVIDIA announced a rebrand to the “AI Computing Company”. The theme of the keynote, including supporting t-shirts, was “I Am AI”. The passion for AI was in the air to say the least.

So, what has changed?

First, the chips for parallel processing have become cheaper and smaller while growing more powerful and efficient. Storage capacity has increased while its cost continues to fall. The final piece enabling the breakthrough is the availability of “Big Data”. The resurgence began in 2012, when researcher Alex Krizhevsky created a deep neural network that automatically learned to recognize images after training on more than one million examples. With only a few days of training on Graphics Processing Units (GPUs), “AlexNet” beat all the hand-engineered expert algorithms that had been continually refined for decades.

Deep Learning and Artificial Intelligence

Deep Learning is at the core of the AI re-awakening. Most people unknowingly use deep learning every day through applications like Siri and Alexa, or through the recommendation engines of Netflix, Amazon, and eBay. Perhaps the most visible example today is the rise of self-driving cars. In 2014, there were 1,500 artificial intelligence companies. Today, that number has exploded to 39,000.

Dominating the exhibit floor at GTC2017 was a large John Deere tractor towing a “See & Spray” machine developed by Blue River. Using computer vision to identify each individual plant, the machine applies herbicide only to the weeds while supplying water and fertilizer directly to the crop, only as needed. One of the smallest examples was the Teal drone: the world’s fastest production drone, which also carries a powerful onboard computing platform. It supports command and control, live video streaming, waypoint navigation, geo-fencing, an app store, and an SDK. Such a drone could be used to check on the baby in another room or on your kids playing in the park, and even tell them to come home. It could also investigate and relay pictures of whatever triggered an alarm system.

Making Virtual Reality a Reality

Moving to the keynote, NVIDIA CEO Jensen Huang announced the Holodeck collaborative Virtual Reality (VR) project. The demo showed four engineers spread across the world examining a model of the Koenigsegg Regera, a $1.9 million supercar. The engineers could interact with the car: opening its doors, sitting in the seat, and adjusting the steering wheel or air vents. Because of the detail of the digital rendering (50 million polygons, according to NVIDIA), users can apply transparency filters and look through the shell of the car to view the inner workings of all the mechanical parts. It is not quite the holodeck from Star Trek, but letting users consult on design changes in real time is the first step.

Another feature of virtual worlds is the malleability of time.

Training a robot in the real world can take considerable time because of the number of iterations required to develop the necessary skills. In a virtual world, numerous robots can be trained simultaneously, with the “brain” of the smartest robot then instantiated into all of the other robots at the beginning of the next round. Once the brain has completed its virtual training, it is programmed into the physical robot. When the robot wakes up, it has all the “memories” of its training and needs only a bit of adaptation in the real world to achieve its task. Exploiting the virtual world, NVIDIA has created the Isaac robot simulator to speed up time to market. The simulator works with virtual sensors and can connect to OpenAI Gym, a toolkit for reinforcement learning.
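
The train-many-robots, copy-the-best-brain loop described above can be sketched in a few lines. This is a toy illustration, not NVIDIA’s Isaac simulator or the OpenAI Gym API: here a “brain” is just a single number, and “virtual training” scores it against a hidden target the robots must learn.

```python
import random

def score_brain(brain):
    """Toy stand-in for a round of virtual training: rate how well
    this 'brain' performs (closer to the hidden optimum = better)."""
    target = 0.7  # the behavior the robots are supposed to learn
    return -abs(brain - target)

def simulate_fleet(num_robots=8, rounds=20, seed=42):
    """Train a fleet of virtual robots in parallel. After each round,
    instantiate the smartest robot's brain into every robot (with a
    little noise so learning can continue) and start the next round."""
    rng = random.Random(seed)
    brains = [rng.random() for _ in range(num_robots)]
    for _ in range(rounds):
        scores = [score_brain(b) for b in brains]
        best = brains[scores.index(max(scores))]
        brains = [best + rng.gauss(0, 0.05) for _ in range(num_robots)]
    return best  # the brain to program into the physical robot

trained_brain = simulate_fleet()
```

Because every round runs many robots at once and restarts from the best performer, the fleet converges far faster than a single robot learning alone, which is the time-compression benefit of the virtual world.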

Driving the Future of Artificial Intelligence and Deep Learning

Working with numerous partners and other cutting-edge companies, NVIDIA continues its push for autonomous vehicles. By 2020, Airbus and NVIDIA plan to begin field-testing a self-flying airplane. Partnered with Toyota, NVIDIA demonstrated a “Guardian Angel” feature: stopped at a red light, the driver attempts to accelerate when the light turns green, but the car “sees” another car running the red light and disables the accelerator, preventing a terrible accident. At GTC2017, NVIDIA also announced the Xavier System on Chip (SoC), consisting of ARM CPU cores, Volta GPU cores, and a fixed-function Deep Learning Accelerator (DLA). Advertised as performing 30 trillion deep learning operations per second while consuming only 30 watts, Xavier is initially targeted at autonomous vehicles. But with NVIDIA’s decision to open-source the DLA design, it should spread rapidly across the Internet of Things (IoT).
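
The Guardian Angel decision described above reduces to a simple interlock: driver input passes through unless perception flags a hazard. The sketch below is purely illustrative; the function names and boolean interface are assumptions for exposition, not NVIDIA’s or Toyota’s actual software.

```python
def guardian_angel(light_state, cross_traffic_clear, driver_throttle):
    """Hypothetical sketch of a 'Guardian Angel' interlock: override
    the driver's throttle request when the perception stack reports
    cross traffic running the red light; otherwise pass it through."""
    if light_state == "green" and not cross_traffic_clear:
        return 0.0  # disable the accelerator: a car is running the light
    return driver_throttle

# Driver accelerates on green, but a red-light runner is detected:
safe = guardian_angel("green", cross_traffic_clear=False, driver_throttle=0.8)
# With the intersection clear, the driver's input passes through:
normal = guardian_angel("green", cross_traffic_clear=True, driver_throttle=0.8)
```

The point of the demo is exactly this asymmetry: the system never adds motion on its own; it only withholds a driver command that perception has judged unsafe.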

Technological barriers have been breached, and Moore’s Law no longer sets the limit on computational power. Artificial intelligence has long been the final frontier, but technology is now racing “to boldly go where no man/machine has gone before”. Such advances will change and improve our lives in ways we have not yet begun to imagine. The trick will be to constrain it to just before Skynet becomes self-aware.

Tammy Carter

Senior Product Manager

Tammy Carter is the Senior Product Manager for GPGPUs and software products, featuring OpenHPEC for Curtiss-Wright Defense Solutions. In addition to an M.S. in Computer Science, she has over 20 years of experience designing, developing, and integrating real-time embedded systems in the defense, communications, and medical arenas.