At the Hot Chips conference, Google gave an impressive example of what its in-house TPU AI accelerator is used for: among other things, the chip helps design its own successors.
What sounds a little dystopian has a logical background: AI models are hungry for all the compute they can get during training, while the development of new chips is becoming ever more complex and time-consuming. At Hot Chips, currently taking place on the Stanford University campus in California, Google broke the at least three years of development time down into four broad blocks: conception (6 to 12 months), implementation of the design selected in the previous stage (12 months), tape-out at the manufacturing partner (6 months), and ramp-up to mass production (12 months).
Faster and better
The last two blocks can hardly be accelerated and tend to get longer: at modern process nodes, it takes 3 to 4 months for a blank wafer to become one from which finished dies can be sawn, plus downstream steps such as packaging. In the implementation phase, however, AI can massively speed up tasks such as macro or block placement: experts need 6 to 8 person-weeks to lay out a TPU block intended for the next generation, whereas an AI completed the same task in 24 hours.
Despite the blurriness, it is apparent that the layout chosen by the AI (right) differs significantly from the hand-crafted circuit (left).
Not only that: the layout chosen by the AI also gets by with almost 3 percent less wire length. Although Google presented only very blurry images as evidence, it is clearly visible that the AI chose a rather rounded, organic layout, while the human-made arrangement is distinctly symmetrical.
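How does one compare the wire length of two placements? A standard proxy metric in chip placement is the half-perimeter wirelength (HPWL): for each net, take the half-perimeter of the bounding box around its pins and sum over all nets. The sketch below uses made-up coordinates purely for illustration; Google did not disclose which metric or netlist underlies its 3 percent figure.

```python
# Half-perimeter wirelength (HPWL): a common proxy for routed wire
# length when comparing chip placements. All coordinates below are
# hypothetical, not from Google's presentation.

def hpwl(net_pins):
    """Sum, over all nets, of the half-perimeter of each net's
    pin bounding box. net_pins is a list of nets, each net a list
    of (x, y) pin coordinates."""
    total = 0.0
    for pins in net_pins:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two hypothetical placements of the same two-net design:
human_placement = [[(0, 0), (10, 0), (10, 8)], [(2, 2), (6, 9)]]
ai_placement    = [[(0, 0), (9, 0), (9, 8)],  [(2, 2), (6, 8.5)]]

h, a = hpwl(human_placement), hpwl(ai_placement)
print(h)                      # 29.0
print(a)                      # 27.5
print(f"{(h - a) / h:.1%}")   # relative wirelength saving
```

HPWL is only an approximation of final routed length, but it is cheap to compute, which is why placement tools and learned placers alike optimize it.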
One level up, in assembling and connecting multiple blocks, the AI was also ahead: of 37 blocks, it placed 26 better than a human and another 7 at least as well; only 4 turned out worse. (mue)