Andrej Karpathy, the deep learning and computer vision expert who was hired five years ago as Tesla's director of AI and led the Autopilot vision team, is officially leaving the company.
Karpathy was on a four-month leave of absence, fueling widespread speculation as to whether he would return.
In a tweet posted Wednesday afternoon, Karpathy wrote, "It’s been a great pleasure to help Tesla towards its goals over the last 5 years and a difficult decision to part ways. In that time, Autopilot graduated from lane keeping to city streets and I look forward to seeing the exceptionally strong Autopilot team continue that momentum."
— Andrej Karpathy (@karpathy) July 13, 2022
Karpathy said he has no concrete plans for what he might do next, adding that he plans to spend more time "revisiting my long-term passions around technical work in AI, open source and education."
Sources have previously told TechCrunch that Karpathy is considering some venture investing.
Karpathy's announcement comes as Tesla disclosed in a California regulatory filing that it is laying off 229 data annotation employees, part of the company's larger Autopilot team, and shuttering the San Mateo, California office where they worked.
Prior to joining Tesla in 2017, Karpathy was a researcher at OpenAI, the artificial intelligence nonprofit backed by Elon Musk. He has an extensive background in AI-related fields and was the creator of one of the most respected deep learning courses taught at Stanford University.
His role at Tesla, where he focused on the computer vision system built to support the Autopilot advanced driver assistance system, tied back to his dissertation work. There, Karpathy focused on creating a system in which a neural network could identify multiple discrete, specific items within an image, label them in natural language and report them to a user. Notably, the work also included a system that ran in reverse, allowing a model to take a natural language description (e.g., "black dress") and locate that object in a given image.