#GTC19 – Our Week
As usual, the 2019 Nvidia GPU Technology Conference was action-packed, exciting, and highly memorable: five days of training, sessions, amazing demos, and conversations with people who are changing the world with AI.
The major highlights for me this year were:
- Introduction of the new Nvidia Jetson Nano. Super small, super powerful. I got my hands on one and can’t wait to try it out!
- Generative Adversarial Networks (GANs) were a big topic, more so than last year. Many sessions described using them to create synthetic data for everything from fashion design and pathology data to automobile damage.
- Geisinger Health discussed a mortality-prediction study they conducted using over 20 years of patient health records. This blew my mind: a health system with 20 years of data in electronic format. The work they can do with that type of data will be truly stunning. Too bad we can’t all have access to that type and amount of data.
- The work BMW is doing with AI is beyond astounding. They use image processing to find defects on the exterior of newly built cars. A car passes in front of 12 cameras for a total of 73 seconds. In that time, their pipeline can: (1) find defects such as scratches or dents, (2) check that tolerances, such as the gap between the front door and the front quarter panel, are correct, (3) redact the faces of employees who may appear in the images, for confidentiality reasons, (4) verify that the car was built to the customer’s specs, such as the correct wheels, headlights, etc., and (5) verify that the car is the correct color. That last part is not as easy as it sounds: they have 11 different shades of black alone. All of this feeds a massive feedback loop in which the data helps improve the production of cars as they are being built.
- Autonomous vehicles weren’t a hot topic this year. In a strange way, it seems autonomous vehicles have gone mainstream within the AI/DL/ML community (a good thing!), and the hype and energy are shifting to other verticals. It will be interesting to see how this segment evolves over time, especially as large-scale rollouts start happening.
- Software to help find the right hyperparameters for AI/ML models was huge. This matters because it can save data scientists a lot of time. It has to be taken with a grain of salt, however: as with many neural networks, the search itself can be a black box, and there is no real substitute for running experiments to find the right learning rates, number of layers, and so on. Still, these packages can reduce the number of experiments needed by giving you a better starting point. With standard ML algorithms like logistic regression they can work nicely, but so can stepwise logistic regression.
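To make that concrete, here is a minimal sketch of the kind of hyperparameter search these tools automate, using scikit-learn's `GridSearchCV` on a logistic regression model. The dataset and the candidate values for the regularization strength `C` are illustrative choices of mine, not anything shown at the conference:

```python
# Minimal sketch: cross-validated grid search over one hyperparameter
# (the regularization strength C) of a logistic regression model.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Illustrative candidate values for C (inverse regularization strength).
grid = {"C": [0.01, 0.1, 1.0, 10.0]}

search = GridSearchCV(
    LogisticRegression(max_iter=5000),  # extra iterations so the solver converges
    param_grid=grid,
    cv=5,                # 5-fold cross-validation for each candidate
    scoring="accuracy",
)
search.fit(X, y)

print(search.best_params_)            # the C that scored best under cross-validation
print(round(search.best_score_, 3))   # its mean cross-validated accuracy
```

The point of the "better starting point" argument above: instead of hand-running four separate training experiments, the search does them for you and hands back the most promising configuration to refine further.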
My time at GTC this year was invaluable. The only problem with this conference was that it wasn’t two weeks long. Until next year #GTC!
Follow me on Twitter! @pacejohn