This system does 1000x more operations than IBM's Deep Blue, which beat Kasparov in chess in 1997. Deep Blue literally weighed a ton and was the size of a refrigerator, while the M1 is a single chip with 16 billion 5-nm transistors across dozens of layers, with some 175 TB of information in its design, doing roughly 11 trillion operations per second. Deep Blue did 11 billion.
They obviously have a team for each section of the chip — something like 23 sections — but what's the 10,000-foot view on sitting down and "bettering the previous chip"? The question is generic to CPU design. Is adding more transistors the primary lever for speed increases? Does every engineer know that each gate in their team's section has to output high or low on some clock cycle? Do they just add more gates doing work in parallel?
Obviously computers are "designing" the new layouts, since the complexity is unfathomable for humans to even keep track of. They're generating UV light, bouncing it off mirrors, and sending it through a puddle of water to project the lithographic pattern! This stuff wouldn't even seem believable as science fiction at this point.