Every few years, I invite readers and colleagues to contribute guest columns in the series Technology and my Hobby/Passion. Over a hundred contributed in the last decade on their birding, charities, cooking, music, sports and every other passion, and how it keeps evolving with technology. Click here and scroll down to read them all.
This time it is Nick Hortovanyi, from Down Under. He wrote 6 years ago in this guest series about his cycling. This time he writes about his new passion - self-driving cars. No, not just turning on Autopilot on a Tesla, he is coding, working with more precise GPS, LiDAR etc. Get ready for some geek talk, but hey, that's his passion. And we can all benefit from how much he has picked up long distance from online courses:
Sitting down one day over a latte, I made one of those fateful decisions: my cycling startup idea just wasn't going to work. It was time to start thinking about something new to do. But what? I had decided a while ago that I wasn't going back into the corporate IT world. It was mainly consulting and sales, and most of it had to do with recording data and then reporting on it.
I like to get my hands dirty with technology, I like to code, and I like to see the results of my own hypotheses and analysis. More importantly, I could see that the focus of the future would be analyzing tons of data from sensors using Artificial Intelligence. This data would be collected in real time, enabling agents to interact with their environment.
The main reason I’d given up on my previous startup idea was that I couldn’t find a large enough market to warrant further investment. So after a European holiday, while I was contemplating what to do, I received an email about the new Self Driving Car Engineer Nanodegree (SDCAR ND) from Udacity. I applied and luckily, I was accepted. The acceptance email arrived on my birthday. So, I was really happy with that present.
The global market size for self driving cars will likely be in the trillions of dollars. When that generation of cars arrives, it will have the most disruptive effect on society since the arrival of the first automobile. Some say we are today at the same stage as when the auto started to disrupt the horse-centric world of the early 1900s. To me this is really exciting. It represents a lot of potential at so many different levels.
So yes, I have found a massive market. But first I needed to brush up my skills, starting with math. I spent time going over trigonometry, calculus, matrices and Euclidean geometry via the Khan Academy.
When the Udacity course started, I jumped online and joined the Slack & Facebook groups. Everyone was really supportive and vocal on Slack. I loved it. I had found a whole group of people who wanted to learn about using the latest technology. I was in techie heaven. Later I found out that the first few classes were full of overachievers, and the communication died down in later groups. However, I met some amazing people online.
The Udacity Nanodegrees are about learning practical skills through hands-on projects. So first off we learn a bit of theory, and then do a project. Self driving is a really complex area, but to get us started, we created our first lane detection algorithm.
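To give a flavour of what a first lane detection project looks like, here is a minimal, hypothetical sketch (not the actual course solution): threshold the bright lane-marking pixels in a camera image, split them left and right of the image centre, and fit a straight line to each side.

```python
import numpy as np

def detect_lanes(img, threshold=200):
    """Toy lane detection: find bright pixels, split them left/right
    of the image centre, and least-squares fit a line x = m*y + b
    to each side (x as a function of y, as lane pipelines often do)."""
    ys, xs = np.nonzero(img > threshold)   # coordinates of bright pixels
    mid = img.shape[1] / 2
    lanes = {}
    for side, mask in (("left", xs < mid), ("right", xs >= mid)):
        if mask.sum() >= 2:
            m, b = np.polyfit(ys[mask], xs[mask], 1)
            lanes[side] = (m, b)
    return lanes

# Synthetic 100x100 "road" image with two diverging white lines
img = np.zeros((100, 100), dtype=np.uint8)
for y in range(100):
    img[y, 30 - y // 5] = 255   # left lane marking, drifting left
    img[y, 70 + y // 5] = 255   # right lane marking, drifting right

lanes = detect_lanes(img)
```

Real pipelines use colour and gradient thresholds, perspective transforms, and curved (polynomial) fits, but the shape of the problem is the same.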
After about a year, and three terms, I finished the SDCAR ND degree in the first cohort. It was such a great feeling. I ended up going over to the Bay Area for the graduation ceremony and met people like Sebastian Thrun (founder of Udacity, but more importantly the Stanford Professor whose team won the first Autonomous DARPA challenge and then later formed Waymo, a spinoff from Google). He is the one in blue jeans. David Silver of Udacity is also in the frame.
Encouraged by the SDCAR ND, I wanted to be more involved in this area. So I set up a little side project [sdcar.ai]. Its aim is to use a simulator to train the car's brain. The environment inside the simulator is created from vehicle sensor data. Some of my friends joke that I'm creating my own Google Maps car ... well I am :).
I knew I needed to know more about AI, Robotics and Sensors. A self-driving car is really just a robot that can carry things, like us humans and freight. So in the years since, I've also completed the Robotics Software Engineer, Deep Reinforcement Learning (think DeepMind and AlphaGo) and recently the Sensor Fusion Nanodegree.
My quest to learn more grows as I learn more. What is frustrating is when you think you are 80% through, you realize you have likely only defined what needs to be done. I've heard that this is the point where you spend 10x the amount of effort. In this regard it is very different from corporate IT projects.
You can expect to fail often and to have to reapply what you learned to the next iteration. This might involve throwing out a lot of code, and rewriting things when you've used the wrong sensors (I started with cheap Chinese ones) or the wrong technology framework. Sometimes things don't move as fast as you thought they would. So you might need a new programming approach and/or hardware. I attempted at first to capture data on a Raspberry Pi. That was a mistake for images (it's I/O constrained, so it could only write two to three images per second).
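Storage throughput ceilings like this are easy to measure before committing to hardware. A rough, hypothetical benchmark (synthetic frames of an assumed ~2 MB, roughly a raw camera image) might look like this:

```python
import os
import tempfile
import time

def measure_write_rate(n_images=30, image_bytes=2_000_000):
    """Write n_images synthetic frames to disk, forcing each one to
    storage, and report the sustained images-per-second rate. This
    kind of test exposes I/O-bound platforms like an SD-card-backed
    Raspberry Pi."""
    payload = os.urandom(image_bytes)
    with tempfile.TemporaryDirectory() as d:
        start = time.monotonic()
        for i in range(n_images):
            with open(os.path.join(d, f"frame_{i:04d}.raw"), "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())   # don't let the OS cache hide the cost
        elapsed = time.monotonic() - start
    return n_images / elapsed
```

The `os.fsync` matters: without it, the operating system's write cache makes the first few seconds look deceptively fast.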
My current in-vehicle data collection system can collect images, with header and nanosecond timestamps, at 30 Hz. Following is an example video, where I've used a YOLO Object Detector to classify what can be seen in each image.
YOLOV3 Object Detection Surfers Paradise Traffic Lights from Nick Hortovanyi on Vimeo.
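Detectors like YOLO emit many overlapping candidate boxes per object, so a standard post-processing step is non-maximum suppression (NMS) based on intersection-over-union (IoU). This is an illustrative sketch of that step, not my actual pipeline:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring box, then drop any remaining
    box that overlaps it by more than iou_threshold; repeat."""
    order = np.argsort(scores)[::-1]   # indices, best score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:]
                          if iou(boxes[i], boxes[j]) < iou_threshold])
    return keep

# Two near-duplicate detections of one car, plus a distant traffic light
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
```

Running `nms(boxes, scores)` here keeps the best of the two overlapping boxes and the separate one.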
Another example - not all GPSes are the same. Most are not very accurate. If you plot their data, you often end up on the sidewalk or the other side of the road. Other issues arise around providing live error-correction data from a known base station. I have now moved to the much more accurate UBLOX ZED-F9P.
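Positioning errors like "you end up on the sidewalk" can be quantified with the haversine formula, which gives the great-circle distance in metres between two latitude/longitude points. The coordinates below are illustrative, not real logged fixes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# A hypothetical consumer-GPS fix ~0.00003 degrees of latitude off
# the true position (near the Gold Coast) is already metres of error
error = haversine_m(-28.00000, 153.43000, -28.00003, 153.43000)
```

One ten-thousandth of a degree of latitude is roughly 11 metres, which is why a few digits of GPS noise puts a plotted track on the wrong side of the road.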
I have now done close to 30 projects via various Udacity Nanodegree courses. I have learnt so much. The majority of my work is available from my github repository.
Recently, I splurged and bought a 16-beam 360 degree LiDAR from Robosense (LiDAR uses light waves from a laser, similar to how Radar and Sonar use radio and sound waves). It can detect objects at distances of up to 200 metres (roughly 650 feet). The Sensor Fusion Nanodegree has given me the confidence to play with it. So that's what I'm working on next...
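A spinning multi-beam LiDAR reports each return as a range plus an azimuth and elevation angle; turning those into a 3D point cloud in the sensor frame is a small spherical-to-Cartesian conversion. A minimal sketch (illustrative values, not Robosense's actual driver output):

```python
import numpy as np

def spherical_to_cartesian(ranges, azimuths_deg, elevations_deg):
    """Convert LiDAR returns (range in metres, azimuth and elevation
    in degrees) into x, y, z points in the sensor frame, with x
    forward, y left, and z up."""
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.column_stack((x, y, z))

# A return at the 200 m limit straight ahead, and one 90 degrees to the side
pts = spherical_to_cartesian(
    np.array([200.0, 10.0]),
    np.array([0.0, 90.0]),
    np.array([0.0, 0.0]),
)
```

Each of the 16 beams has a fixed elevation angle, so a full sweep yields 16 such rings of points per rotation.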