Elon Musk doesn’t want Tesla to be just an automaker. He wants Tesla to be an AI company, one that has figured out how to make cars drive themselves.
Crucial to that mission is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. FSD isn’t actually fully self-driving; it can perform some automated driving tasks but still requires an attentive human behind the wheel. Tesla believes, though, that with more data, more compute power, and more training, it can cross the threshold from almost self-driving to fully self-driving.
And that’s where Dojo was supposed to come in.
Musk teased Dojo for years and ramped up talk of the supercomputer throughout 2024. Now, however, Dojo is out.
Below is a timeline of Dojo mentions and promises. For more on what Dojo is, why it matters, and what comes next, you can read our Dojo explainer.
2019
First mention of Dojo
April 22nd – At Tesla’s Autonomy Day, the automaker puts its AI team on stage to talk about Autopilot and Full Self-Driving, and the AI powering them both. The company shares information about Tesla’s custom-built chips designed specifically for neural networks and self-driving cars.
During the event, Musk teases Dojo, revealing that it’s a supercomputer for training AI. He also notes that every Tesla car being produced at the time has all the hardware needed for full self-driving and only needs a software update.
2020
Musk starts the Dojo roadshow
February 2 – Musk says Tesla will soon have more than 1 million connected vehicles worldwide with the sensors and compute needed for full self-driving, and touts Dojo’s capabilities:
“As a training supercomputer, Dojo can process huge amounts of video training data and efficiently run hyperspace arrays with a huge number of parameters, lots of memory, and ultra-high bandwidth between the cores.”
August 14th – Musk reiterates Tesla’s plan to “process a truly enormous amount of video data” with a neural network training computer called Dojo, calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its debut around August 2021.
December 31 – Musk says Dojo isn’t strictly necessary, but it will make self-driving better: “It’s not enough to be safer than a human driver. Autopilot ultimately needs to be more than 10 times safer than a human driver.”
2021
Tesla makes Dojo official
August 19th – The automaker officially announces Dojo at Tesla’s first AI Day, an event meant to attract engineers to Tesla’s AI team. Tesla also introduces its D1 chip, which the automaker says it will use, alongside Nvidia’s GPUs, to power the Dojo supercomputer. Tesla notes the AI cluster will house 3,000 D1 chips.
October 12th – Tesla releases a Dojo Technology whitepaper, “A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic.” The whitepaper outlines a technical standard for new types of binary floating-point arithmetic used in deep learning neural networks that can be implemented “entirely in software, entirely in hardware, or in any combination of software and hardware.”
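To make the idea concrete, here is a minimal, illustrative Python sketch of a configurable binary floating-point format: a value is packed into a sign bit, a configurable number of exponent bits, and a configurable number of mantissa bits, then unpacked again. The bit widths, rounding behavior, and handling of tiny values below are assumptions for demonstration and do not reproduce the specific formats defined in Tesla’s whitepaper.

```python
# Toy encoder/decoder for a configurable binary floating-point format:
# 1 sign bit + exp_bits exponent bits + man_bits mantissa bits.
# Defaults give an 8-bit value (similar in spirit to an E5M2-style float).
# Very small magnitudes are flushed toward the smallest normal number
# (no subnormals in this sketch).
import math

def encode(value: float, exp_bits: int = 5, man_bits: int = 2) -> int:
    """Pack a Python float into a (1 + exp_bits + man_bits)-bit integer."""
    sign = 1 if value < 0 else 0
    mag = abs(value)
    bias = (1 << (exp_bits - 1)) - 1
    if mag == 0.0:
        return sign << (exp_bits + man_bits)
    exp = max(min(math.floor(math.log2(mag)), bias), 1 - bias)  # clamp range
    frac = mag / (2.0 ** exp) - 1.0                  # fractional part of mantissa
    mant = max(0, min(round(frac * (1 << man_bits)), (1 << man_bits) - 1))
    return (sign << (exp_bits + man_bits)) | ((exp + bias) << man_bits) | mant

def decode(bits: int, exp_bits: int = 5, man_bits: int = 2) -> float:
    """Recover an approximate float from the packed representation."""
    bias = (1 << (exp_bits - 1)) - 1
    sign = -1.0 if (bits >> (exp_bits + man_bits)) & 1 else 1.0
    exp = ((bits >> man_bits) & ((1 << exp_bits) - 1)) - bias
    mant = bits & ((1 << man_bits) - 1)
    if exp == -bias and mant == 0:                   # packed zero
        return sign * 0.0
    return sign * (1.0 + mant / (1 << man_bits)) * (2.0 ** exp)

print(decode(encode(3.14159)))   # ~3.0: only 2 mantissa bits of precision
print(decode(encode(-0.1875)))   # exactly representable: -0.1875
```

Shrinking or growing the exponent and mantissa fields trades numeric range and precision against memory and bandwidth, which is the knob a configurable format exposes to training hardware and software.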
2022
Tesla reveals Dojo progress
August 12 – Musk says Tesla will “phase in Dojo” and won’t need to buy as many incremental GPUs the following year.
September 30th – At Tesla’s second AI Day, the company reveals that it has installed its first Dojo cabinet and run a 2.2-megawatt load test. Tesla says it is building one tile per day (each made up of 25 D1 chips). Tesla also demos Dojo running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars” (a rough sketch of what such a generation looks like with the public Stable Diffusion release follows after this entry).
Importantly, the company sets a target of the first quarter of 2023 for completing a full ExaPOD cluster, and says it plans to build a total of seven ExaPODs in Palo Alto.
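For readers curious what that on-stage demo involved, the sketch below shows roughly how such an image can be generated with the publicly released Stable Diffusion weights via the Hugging Face diffusers library. Tesla ran its demo on Dojo hardware with its own software stack, so the checkpoint, prompt, and settings here are illustrative assumptions rather than Tesla’s actual setup.

```python
# Illustrative only: text-to-image generation with the public Stable Diffusion
# v1.4 checkpoint via Hugging Face diffusers. Tesla's AI Day demo ran this kind
# of workload on Dojo; the model ID, prompt, and settings below are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # public checkpoint, assumed for the example
    torch_dtype=torch.float16,
).to("cuda")                          # any CUDA-capable GPU; Dojo not required

image = pipe(
    "a Tesla Cybertruck parked on the surface of Mars, cinematic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("cybertruck_on_mars.png")
```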
2023
A “long-shot bet”
April 19 – During Tesla’s first-quarter earnings call, Musk tells investors that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and that it could become a sellable service offered to other companies in the same way that Amazon Web Services offers web services.
Musk also says he sees Dojo as “kind of a long-shot bet,” but one that’s “worth the bet.”
June 21 – The Tesla AI X account posts that the company’s neural networks are already running in customer vehicles. The thread includes a graph with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production in July 2023, though it’s not clear whether this refers to the D1 chips or the supercomputer itself. The same day, Musk says Dojo is already online and running tasks at Tesla’s data centers.
The company also predicts that Tesla’s compute will rank among the top five in the world by around February 2024 (there is no indication this happened) and that Tesla will reach 100 exaflops by October 2024.
July 19 – In its second-quarter earnings report, Tesla notes that it has started production of Dojo. Musk says Tesla plans to spend more than $1 billion on Dojo through 2024.
September 6th – Musk posts on X that Tesla is constrained by AI training compute, but that Nvidia and Dojo will fix that. He says managing the data from the roughly 160 billion frames of video Tesla gets from its cars each day is extremely difficult.
2024
Plans to scale
January 24th – During Tesla’s fourth-quarter and full-year earnings call, Musk once again acknowledges that Dojo is a high-risk, high-reward project. He also says Tesla is pursuing “the dual path of Nvidia and Dojo,” notes that Tesla is scaling up Dojo, and says there are “plans like Dojo 1.5, Dojo 2, Dojo 3, etc.”
January 26th – Tesla announces plans to spend $500 million to build a Dojo supercomputer in Buffalo, New York. Musk then downplays the investment somewhat, posting on X that while $500 million is a large sum, it’s “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year.”
April 30 – At TSMC’s North American Technology Symposium, the company says Dojo’s next-generation training tile, the D2, will put an entire Dojo tile onto a single silicon wafer rather than connecting 25 chips to make one tile.
May 20th – Musk says the rear portion of the Giga Texas factory extension will include the construction of an “ultra-dense, water-cooled supercomputer cluster.”
June 4th – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially calling the report false, Musk posts on X that Tesla had nowhere to power up the Nvidia chips because of the ongoing construction of the south extension of Giga Texas, so “they would have just sat in a warehouse.” He notes the extension will “house 50k H100s for FSD training.”
He also posts:
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, Nvidia hardware is about 2/3 of the cost,” he writes, adding his current best guess for Tesla’s Nvidia purchases this year.
July 1 – Musk reveals on X that Tesla’s current vehicles may not have the right hardware for the company’s next-generation AI model. He says the roughly fivefold increase in parameter count that comes with next-gen AI “is extremely difficult to achieve without upgrading the vehicle’s inference computer.”
Nvidia’s supply challenges
July 23rd – During Tesla’s second-quarter earnings call, Musk says demand for Nvidia hardware is so high that it’s “often difficult to get the GPUs.”
“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability we need,” Musk says. “And we see a path to being competitive with Nvidia with Dojo.”
A graph in Tesla’s investor deck predicts that Tesla’s AI training capacity will increase to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “about 8k H100-equivalent of training online by the end of the year.” He also posts photos of the supercomputer, which appears to use the same refrigerator-like stainless steel exterior as Tesla’s Cybertruck.
From Dojo to Cortex
July 30 – AI5 is roughly 18 months away from high-volume production, Musk says in a reply to a post from someone claiming to be starting a club of angry Tesla HW4/AI4 owners who are mad about being left behind when AI5 comes out.
August 3 – Musk posts on X that he took a walkthrough of the Tesla supercompute cluster at Giga Texas (also known as Cortex). He notes it will be made up of around 100,000 Nvidia H100/H200 GPUs with “large storage for FSD and Optimus video training.”
August 26th – Musk posts a video of Cortex on X, calling it “a huge new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”
2025
Dojo shut down, team disbanded
January 29th – Tesla’s fourth-quarter and full-year 2024 earnings call doesn’t include any mention of Dojo. Cortex, Tesla’s new AI training supercluster at the Austin gigafactory, does come up, though. In its shareholder deck, Tesla notes that it has completed the deployment of Cortex, which consists of approximately 50,000 Nvidia H100 GPUs.
“Cortex helped enable V13 of FSD (Supervised), which boasts significant improvements in safety and comfort, including a 4.2x increase in data and higher-resolution video inputs,” the deck reads.
During the call, CFO Vaibhav Taneja notes that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13, which added to the company’s accumulated AI-related capital expenditures, including infrastructure. For 2025, Taneja says he expects AI-related capex to be flat.
July 23 – During Tesla’s second-quarter 2025 earnings call, Musk says he expects Dojo 2 to be operating at scale sometime in 2026, with that scale being “equivalent to about 100k H100s.” In the same breath, he hints at a possible redundancy.
“When you think about Dojo 3 and the AI6 inference chip, intuitively you try to find a convergence there, where it’s basically the same chip,” Musk says.
July 28th – Tesla signs a $16.5 billion deal to source its next-generation AI6 chips from Samsung. The AI6 is the chip design Tesla is betting on to power everything from FSD and its Optimus humanoid robots to high-performance AI training in data centers.
August 6th – Bloomberg reports that roughly 20 Dojo workers have left to launch their own startup, DensityAI, which is building AI chips, hardware, and software.
August 7th – Bloomberg reports that Tesla has disbanded its Dojo team and is shutting down the project. Dojo lead Peter Bannon is also leaving the company.
Musk responds to the report on X: “It doesn’t make sense for Tesla to split its resources and scale two quite different AI chip designs. The Tesla AI5, AI6, and subsequent chips will be excellent for inference and at least pretty good for training.”
August 10th – “Once it became clear that all paths converged to AI6, Dojo 2 was an evolutionary dead end, so we had to shut Dojo down and make some tough HR decisions,” Musk posts on X, his social media platform. “Dojo 3 definitely lives on in the form of numerous AI6 SoCs (systems-on-a-chip) on a single board.”
September 1st – Tesla shares Master Plan Part IV on its social media platform X. It makes no mention of Dojo or Cortex, but “physical AI” plays a central role.
This story was originally published on August 10th, 2024. The Tesla Dojo timeline was last updated on September 2, 2025.