What's new?

Intel Larrabee = 32 core Pentium P54C

Personal experience, or rather opinion (after a year and a half of active CUDA programming, and quite a few years of earlier work with various parallel technologies):

I think Larrabee, despite all its possible advantages, will have a hard time getting accepted by programmers, at least for GPGPU work, if Intel insists on its own API. NVIDIA has already established itself with CUDA by offering a simple, easily understood API (I always remember what Prof. Wen-mei Hwu of UIUC tells his students at the start of the CUDA course - the video recordings of those lectures are publicly available, by the way, and are probably the best material for learning CUDA - something along the lines of "the CUDA API is so simple that in the next hour and a half I will teach you to write parallel programs", and then he basically really does it); so in the end even OpenCL, as the first attempt at standardizing an API for programming many-core architectures, turned out to be very similar to CUDA. So in this segment, in my view, Intel only has a chance if it offers significantly better performance and scalability while respecting the standard APIs.

I'd say the same goes for the computer graphics domain: there the API question is already settled, so the only questions are what level of OpenGL/Direct3D support the Larrabee drivers will offer, and how the performance of Larrabee parts will compare to the current NVIDIA/AMD lineup.

All in all, my opinion is that Intel can be very satisfied if it manages to take a slice of the market, and that it will be very hard, at least in the first round, for Larrabee to wipe out the competition in one stroke across all market segments.
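For illustration, this is roughly the kind of first example such a course starts from - a minimal, hypothetical CUDA vector-add program (the names and sizes are mine, not taken from the lectures), just to show how little code a complete parallel program needs:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds exactly one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch: one thread per element, 256 threads per block.
    const int block = 256;
    const int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f, c[n-1] = %f\n", hc[0], hc[n - 1]);  // both should be 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```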
 
How Larrabee will fare in games I don't know; a lot of that depends on the teams writing the drivers.
As for GPGPU, I think Larrabee is going to crush it. It simply handles memory far better than existing GPUs (coherent caches, memory shared by all cores, gather/scatter), and it has predication supported by the instruction set, which will eliminate 99% of the if-then-else blocks in critical loops. I can't wait for them to release it; I'd buy one just to program it :)
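Just to illustrate what "predication eliminating if-then-else blocks" would mean at the source level, here is a hypothetical sketch (written in CUDA syntax purely for illustration; on Larrabee it would be the x86 vector ISA doing this, but the idea is the same): the data-dependent branch is folded into a select, so every lane runs one and the same instruction stream and only the result is masked.

```cuda
// Branchy version: the path through the if-then-else can differ per element.
__global__ void reluScaleBranchy(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (x[i] < 0.0f)
            x[i] = 0.0f;
        else
            x[i] = 2.0f * x[i];
    }
}

// "Predicated" version: the same computation expressed as a select, so there
// is no data-dependent branch left in the loop body at all.
__global__ void reluScaleSelect(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = x[i];
        x[i] = (v < 0.0f) ? 0.0f : 2.0f * v;
    }
}
```

Launched like any other kernel (e.g. reluScaleSelect<<<grid, block>>>(d_x, n)), both compute the same result; the point of ISA-level predication is that something like the second form is what the hardware ends up executing.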
 
As for GPGPU, I think Larrabee is going to crush it. It simply handles memory far better than existing GPUs (coherent caches, memory shared by all cores, gather/scatter), and it has predication supported by the instruction set, which will eliminate 99% of the if-then-else blocks in critical loops.

With the caveat that I haven't followed the announcements of exactly what the Larrabee architecture will look like in much detail (they fooled me once, with Itanium, into reading a lot in advance about something that came to nothing in the end), I have to say that if the things listed above really are its trump cards for GPGPU, then it will be genuinely nice if they manage to pull it off, but I'm somewhat skeptical. Full cache coherence is an enormously hard problem, or more precisely, the solutions are expensive and not fast enough; if it could be made scalable, we would have had 32-way, 64-way, etc. multiprocessor systems long ago. The same goes for memory shared by all cores. A scatter/gather interface, moreover, is at odds with having memory shared by all cores - if I can simply write a value into shared memory that all the cores will see, then I don't need scatter.

And finally, predication - that is completely irrelevant. GPGPU applications are all about "streaming" processing, where the same operations are applied to an enormous number of data items; there isn't much need for if-then-else constructs there, and predication hardware is exactly the kind of thing that devours transistors. The whole point of GPUs being able to pack so many floating-point units into roughly the same transistor budget in which CPUs fit only a handful of them is precisely that GPUs, because of the nature of the data they process, don't need predication logic, so they simply leave it out (and exactly the same argument applies to rendering as well as to typical GPGPU applications)...
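For context, this is what that "streaming" style typically looks like in GPGPU code - a hypothetical, kernel-only sketch of a CSR sparse matrix-vector product: every thread runs the same gather-and-accumulate loop over its row, the indexed read x[colIdx[j]] is the gather, and there is no if-then-else in the inner loop at all.

```cuda
// y = A * x for a matrix A stored in CSR format, one thread per row.
// rowPtr has numRows + 1 entries; colIdx and val hold the nonzeros.
__global__ void spmvCsr(const int *rowPtr, const int *colIdx,
                        const float *val, const float *x,
                        float *y, int numRows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < numRows) {
        float sum = 0.0f;
        // Same arithmetic for every thread; the only control flow is the
        // per-row loop bound, not a data-dependent if-then-else.
        for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
            sum += val[j] * x[colIdx[j]];   // indexed read = gather
        y[row] = sum;
    }
}
```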

All in all: with my best wishes to Intel to really build something interesting, I remain reserved - as my boss likes to say, "PowerPoint doesn't compile"; only when the product, together with the drivers and the SDK, is finished will we be able to talk objectively about whether there is a breakthrough, and exactly how big it is. In the meantime, CUDA (and recently OpenCL) is here, available, and serves perfectly well, so there's no point waiting for something that's still to come when you can already have plenty of fun (and earn money) writing parallel code for a perfectly fine architecture and API.
 
How Larrabee will fare in games I don't know; a lot of that depends on the teams writing the drivers.
As for GPGPU, I think Larrabee is going to crush it. It simply handles memory far better than existing GPUs (coherent caches, memory shared by all cores, gather/scatter), and it has predication supported by the instruction set, which will eliminate 99% of the if-then-else blocks in critical loops. I can't wait for them to release it; I'd buy one just to program it :)

Bold statement :)
 
It seems to me that Itanium has found its perfect companion: Larrabee.
As they used to say in a once-popular TV intro: "It's a long road from the field to the table."
 
Also we noticed that the Larrabee GPU would measure a whopping 971 square millimeters if the chip was to be produced at 45nm.

Now this is BIG :)
 
And at the same time it will surely be far too expensive, around 1000 euros.

I don't think it makes much sense to generalize like that. It will be priced according to its performance, or it won't exist at all (who would they sell it to). If it turns out faster (which currently sounds like science fiction), it will cost more; if not...

Two days ago word spread that Larrabee is supposedly as fast as the GT200 (though they didn't say in which workloads), and if it really only appears in two years, it will be competing against chips that are at least 100-150% faster, and probably more. Add to that the inevitable problems with drivers and other software optimizations, which won't go smoothly, and for Intel this could turn into a far bigger money pit than Itanium.
 
Intel made a big mistake by not continuing development of the i740 ten years ago, because back then the chip was a real bargain with solid performance...
And it really is a shame that the same foundation has been dragged through integrated boards for ten years now ;)
Even if Larrabee fails, it will at least give integrated graphics a big boost in the coming years :p
 
http://www.pcper.com/comments.php?nid=7531

In a move that is surely going to raise some eyebrows, Intel has apparently decided to add a pair of HD video decoders to the logic on-board the Larrabee graphics chip; this according to a story over at SemiAccurate. This would definitely go against the whole premise of the Larrabee architecture: to do as much work as possible in x86 software without the need for dedicated hardware to any one specific task. Intel said from the beginning that textures required special hardware to run most efficiently but now that Intel has decided HD decoding is too inefficient on the platform, you can't help but wonder if there are other tasks with the same fate.

Expected, expected :)
 
hehe... they really are entertaining with this Larrabee... :) and with x86 in general... Atom vs. ARM... x86 vs everybody :D
 
They'll start adding components from classic graphics cards into Larrabee one by one
 
Larrabee 4 might be the one
Written by Fuad Abazovic
Thursday, 20 August 2009 09:19


Late 2010 or later

Yesterday we reported here that Larrabee 3 is the one that Intel plans to show and possibly ship in 2010, but a recent update indicates that Intel might try another core before it finally releases Larrabee.

Since Larrabee 3 is planned for the middle of 2010, in case it doesn't slip again, Larrabee 4 is being mentioned as a potential launch candidate. If Intel skips Larrabee 3 and decides to go for Larrabee 4, this will cause further delays and might push the release of Intel's GPU to at least late 2010 or early 2011.

With Larrabee's high TDP, Intel desperately needs to move to a smaller manufacturing process, and 32nm certainly looks better than the 45nm Intel currently uses. The question is whether Intel can start manufacturing a bulk 32nm product in the second half of 2010 to meet this schedule.

For the time being, at least until mid-2010, Larrabee is nothing Nvidia and ATI should be worried about, as it is yet to be shown and launched.

Looks like there will be delays?
 
Larrabee:

08_090130155423.jpg


:d
 
This is too much... :cigar:
 
I'm almost certain this is a fake :d:d:d
Because there's no fan connector on the PCB!
 
There is a connector right next to the bottom left corner of the socket :D it's small, but it's there :D
 
When this goes into production, will any aftermarket cooler fit it?
 
And do I plug one of these into the board, or 32 of them? Where do I find a board with 32 slots?
 
New Larrabee silicon taped out weeks ago
Plagues of bugs quashed
by Charlie Demerjian

September 16, 2009

IT LOOKS LIKE Larrabee, Intel's upcoming GPU++, is about to have new silicon in short order. The B0 stepping taped out about a month ago, so there should be some public showings soon.

B0 taped out on August 15, and the silicon is winding its way through the fabs now. Even with Intel's manufacturing prowess, making those parts and bringing up the new silicon takes time. It is going to be a really close call whether they can make showable boards before IDF or not. Here's to hoping.

Word on the street is that Larrabee has a host of bugs which prevented public showings of the Ax silicon. This is not uncommon for a new architecture, and a new paradigm only adds to the pain. That said, almost all problems are said to be fixed in the move from Ax to B0 silicon. If schedules don't allow for the new stepping to be shown at IDF next week, expect a flood of demos at the next major conference.

From the looks of things, Larrabee is going to be pulled from a shroud of mystery and negative rumor into the light very soon. Only then will we see if it lives up to the hype or not. Game on in a matter of weeks.

Pat Gelsinger left Intel because of the Larrabee fiasco?
9/18/2009 by: Theo Valich


Last week we learned that Patrick P. Gelsinger would leave Intel for EMC, and we tried to find out the reason for the move. On one hand, the move made perfect sense. Pat was one of Andy Grove's men, and Paul Otellini did his best to surround himself with his own aces, so the choice of Sean Maloney was logical.

But the underlying issue wasn't that Pat was one of "Andy Grove's men"; the issue was the war with nVidia and under-delivering on Larrabee.

As we all know, the Larrabee project has been problematic at best. Intel started hyping up Larrabee long before it was ready, and the project blew through every deadline. We read through roadmaps and watched Larrabee slip not by quarters, but by years. After we saw roadmaps pushing the introduction of Larrabee all the way back to 2011, and heard that a lot of key industry analysts were dismayed at Intel, Pat's room to maneuver was cut down to a single corner.
A lot of people we talked to were disappointed at Intel "starting a war with nVidia without a product to compete", and after hearing statements such as "Intel is a chip company, not a PowerPoint company", it was clear to us that Intel seriously "screwed the pooch" on this one.

There is no doubt in our minds that Intel is going to deliver Larrabee, as it is the future of the company. But Intel will probably spend an additional billion or so USD on making the chip work [because it is quintessentially broken in hardware, and we haven't even touched the software side], and come to market with a complete line-up. But unlike the CPU division, which only missed the Lynnfield [Core i5-700, i7-800 series] roadmap by six months, the Larrabee project is now a year late, and according to documents we saw, it won't reach the market in the next 12 months. This will put a 45nm Larrabee against 28nm next-gen chips from ATI and nVidia, even with the caveat of using 45nm fabs for the job. According to our sources, in 2011 both ATI and nVidia will offer parts with around 5-7 TFLOPS of compute power, surpassing 10 TFLOPS on dual-ASIC parts. According to the information at hand, Intel targeted 1+ TFLOPS of compute power for the first generation, i.e. less number-crunching performance than the ATI Radeon HD 4870 and nVidia GeForce GTX 285. With Larrabee coming in 2011, the company has revised that number upward to raise the available performance.

We learned about the estimated cost of the Larrabee project, and if it weren't for the best-selling Core 2 series, this project would seriously undermine Intel's ability to compete. To conclude this article: Larrabee was Gelsinger's baby, the project got seriously messed up, and somebody had to pay the bill. Patrick is staying in Santa Clara though, at almost the same address. Given his new job, Patrick P. Gelsinger has simply moved from 2200 Mission College Blvd [the Robert N. Noyce building, i.e. Intel HQ] to 2831 Mission College Blvd [EMC HQ].

A bit of fresh news :)
 
Well, considering it's a "work in progress", that's not bad. Real-world performance is, of course, unknown. If the chip were anywhere near ready right now, Intel would have presented it more concretely. As it stands, it's still vaporware :)

And anyway, the "make or break" for this chip will be the drivers, given how little dedicated hardware it has for "graphics stuff". Intel's track record with graphics drivers doesn't exactly inspire confidence.
 
I don't know what to say, I'm not competent enough... but who knows, ray tracing may not be the technique for right now, but once GPUs are 10x more powerful it might be the next step toward greater realism....
Even if Intel achieves nothing else, it will at least push the boundaries of inventiveness, as it always does
 
Hey, of course Intel is pushing ray tracing, since it suits the Larrabee design extremely well :)

The human eye has a resolving power roughly equivalent to a resolution of, say, 4096x4096.
A model of the perception limits of the human visual system was given, resulting in a maximum estimate of approximately 15 million variable-resolution pixels per eye
That means that in about two generations of graphics chips we'll have around 60 frames per second at the resolution mentioned above. This style of rendering doesn't have much room left for increasing visual realism.
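Just to spell out the arithmetic behind that (back-of-envelope numbers of mine, not from the quoted model):

4096 × 4096 ≈ 16.8 million pixels per frame
16.8 million pixels × 60 fps ≈ 1.0 billion shaded pixels per second per eye
(the quoted 15 million variable-resolution pixels per eye gives ≈ 0.9 billion per second)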
Ray tracing and the application of increasingly realistic physics (e.g. material physics) are the obvious directions of development.

Evidently, Intel knows all this, but so do AMD and nVidia...
 