Intel has been using a fair bit of TSMC capacity in their CPU manufacturing, yes. Most recently they’ve been assembling “tiles” of silicon from several process nodes into a single CPU package, and IIRC they have been using TSMC for the GPU tiles.
Of course Intel has been designing and selling GPUs for years, I guess Lip-Bu means they're going to start manufacturing them as well? Or they're going to be data-center focused now?
Since he was touting that they recently hired a well-known GPU architect, it seems unlikely that this is merely about using their own fabs for discrete GPUs rather than integrated GPUs being the only ones they fab themselves. Some kind of shift in product strategy or a reboot of their GPU architecture development seems more likely, if there's anything of substance underlying the news at all.
But this news is somehow even less comprehensible and believable than usual for Intel, whose announcements about their future plans have a tenuous connection to reality on a good day.
Intel has been making GPUs since the early 1980s, starting with the 82720, or the 82716 if you want to be picky and require a pure-Intel design. They announce a new GPU effort every few years, at about the time it's clear that the previous one has failed.
Again being picky, in theory their integrated graphics are a "success" in that they sell well, but that's because vendors get them for free with the CPU and so don't have to go through the expense of adding a discrete one.
I mean, they're a success in that even a weak discrete GPU is extreme overkill for the majority of people, who just want to browse. It only makes sense to integrate that kind of GPU into another chip, because the overhead of adding I/O and another PCB is just too high for such a weak part.
It's also complicated by the fact that raster performance doesn't directly translate to tensor performance. Apple and AMD both make excellent raster GPUs, but still lose in efficiency to Nvidia's CUDA architecture in rendering and compute.
I'd really like AMD and Apple to start from scratch with a compute-oriented GPU architecture, ideally standardized with Khronos. The NPU/tensor coprocessor architecture has already proven itself to be a bad idea.
That may be true, but assuming you meant "within 30% of the performance" ... can we just acknowledge that that's a rather significant handicap, even ignoring CUDA?
The customers are players that can throw money at the software stack; hell, they're even throwing lots of money at the hardware side too, with proprietary tensor units and such.
And the big players don't necessarily care about the full software stack; they are likely to optimize the hardware for a single use case (e.g. inference, or specific stages of training).
Why doesn't AMD make a framework similar to CUDA? Is it really that big of a task? If it increased their market share, it should be financially viable, no?
ROCm is their CUDA-alike, and IMO it's been a buggy mess, and I'm talking bugs that lock up your entire system until you hard reboot. Same with their media encoders. Vulkan compute is starting to receive support from the likes of llama.cpp and ollama, and I've had way better luck with that on non-Nvidia hardware. Probably for the best that we have a single cross-vendor standard for this.
Exactly. CUDA is a huge moat, and all competitors need to adopt a software-first approach, similar to what tinycorp is trying to do.
Find one single thing that makes CUDA bad to use and TRIPLE DOWN on that.
Which no one cares about. As a 1% player, having a convoluted C++-centric stack when the 99% player has something different, where porting requires critical thinking, means no one gives a damn about it.
ZLUDA has more interest than SYCL, and that should say it all right there.
Intel focused on SYCL, which not many people seem to actually care about. It looks far enough removed from CUDA that you'd have to think hard about porting things as well. From what I understand, ROCm looks very close to CUDA.
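To make the porting-distance point concrete, here is a minimal HIP vector-add sketch (a hypothetical example, not anything from the article): apart from the header and the hip prefix on the runtime calls, it is essentially line for line the CUDA program you would write, which is why ROCm/HIP ports tend to be mechanical, while SYCL asks you to restructure code around queues and lambdas.

```cpp
// Minimal HIP vector add -- compile with hipcc. Each runtime call maps 1:1
// onto its CUDA counterpart (hipMalloc <-> cudaMalloc, etc.).
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same built-ins as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    hipMalloc((void**)&a, bytes);   // cudaMalloc
    hipMalloc((void**)&b, bytes);
    hipMalloc((void**)&c, bytes);
    hipMemset(a, 0, bytes);         // cudaMemset
    hipMemset(b, 0, bytes);

    // The triple-chevron launch syntax works in hipcc just as it does in nvcc.
    vector_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    hipDeviceSynchronize();         // cudaDeviceSynchronize

    hipFree(a); hipFree(b); hipFree(c);  // cudaFree
    printf("done\n");
    return 0;
}
```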
Good to hear. More than two players in the GPU market is a really good thing, and their recent dedicated consumer GPUs are great value in their segment. It may take a few generations before they catch up to Nvidia, but I am hopeful.
Intel started making and selling their own GPUs many years ago; this news is just that they are going to fab the chips themselves instead of outsourcing to TSMC.
It's a confusing article. It strongly implies that Intel will make GPUs for data centers, and it says Intel will produce GPUs without saying whether they are manufacturing them in-house or not.
Intel has been designing GPUs manufactured on TSMC nodes across client and datacenter for at least the past 5 years. The client chips are price competitive but not performance competitive with AMD/NVIDIA/Apple. The data center roadmap has historically been a huge mess with cancelled products left and right. But, to say "Intel will start making GPUs" seems misleading. Perhaps "Intel to try to inject sanity into its GPU roadmap" would be a better headline, though I am skeptical one hire will do anything to fix 10+ years of mismanagement.
I have a B580 in my desktop. Unfortunately AMD still has broken PCIe-level reset, so their GPUs don't work well for assignment to a VM; Intel and Nvidia cards both work fine.
The perf is fine - it was a $350 CAD GPU after all.
I am certainly interested to see where Intel ends up going with their lineup. Having a third player in the GPU space is definitely a good thing.
I have a B580 too. The cool thing about it is architecturally speaking it is basically a mini version of the Ponte Vecchio (PVC) datacenter GPU. You can run most of the datacenter GPU workloads, albeit scaled down to fit the compute/memory constraints of the B580. It's a great vehicle for software development. But you can't buy PVC anymore so it's unclear what you are developing for...
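As a rough illustration of what "same code, different scale" looks like, here is a minimal oneAPI/SYCL sketch (hypothetical, not tied to any particular PVC workload): the source is identical whether the runtime picks up a B580 or a datacenter GPU, only the device name and the sizes you can afford change.

```cpp
// Minimal SYCL (oneAPI DPC++) sketch: the same source targets an Arc B580
// or a datacenter GPU, whichever the runtime's GPU selector finds.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q{sycl::gpu_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr size_t n = 1 << 20;  // scale this down/up to fit the card
    float* a = sycl::malloc_shared<float>(n, q);
    float* b = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Offload a simple elementwise kernel to whichever GPU was selected.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        a[i] += b[i];
    }).wait();

    std::cout << "a[0] = " << a[0] << "\n";  // expect 3
    sycl::free(a, q);
    sycl::free(b, q);
    return 0;
}
```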
It is my understanding that this isn't happening in any meaningful capacity; they're simply using kit that's no longer relevant to R&D.
I'm still not entirely convinced they actually did Arc themselves. It has all the hallmarks of a project that was bought or taken. Every meaningful iteration keeps getting pushed back further out towards the horizon and the only thing they've been able to offer in the meantime is "uhhhh what if we used two"
Intel has been making graphics silicon since the 90s, the current discrete graphics effort has been going for at least a decade, and in areas like low-power video decode and encode it could be argued Intel is class-leading. The concept of the "GPU" is a quarter of a century old. This is an especially poor article, particularly for a publication that has been running as long as TechCrunch.
The most rapid path Intel has to selling competitive GPUs would be to license designs from Groq and apply all effort to getting them working on 14A.
Hyperscalers would bite their hand off, and it would be a viable alternative to TSMC.
Nvidia has left the door open with the non-exclusive license in the acquisition.
https://www.intel.com/content/www/us/en/products/docs/discre...
https://en.wikipedia.org/wiki/List_of_Intel_graphics_process...
Intel's track record of AI strategy announcements:

2016: Nervana. Intel would lead in AI training; the "Nervana NNP" was the future.
2019: Habana. Intel announced the Gaudi and Goya chips as their new official AI strategy, effectively killing the Nervana project.
2021: Xe general HPC/AI GPU (Ponte Vecchio). Intel said they would be shifting to the "AI chip" market.
2023: The "AI PC". Every consumer CPU would now be an "AI chip" with an NPU (Neural Processing Unit).
2024: Intel is now an "AI Systems Foundry", focused on making AI chips for other people (like Microsoft and Amazon).
2026: Intel will start making GPUs.
Raja Koduri, ex AMD/Radeon, ran the project?
That’s the spirit!
https://en.wikipedia.org/wiki/Intel740