
Fermi is Nvidia's DirectX 11

Status
Closed for new replies.

psyman

Q4 2009

Nvidia is working on what looks to be its first DirectX 11 card and, as was the case before, it will introduce DirectX 11 in the ultra high end and pass it down to slower cards at a later date. The codename we've heard is a quite logical one, GT300, and this card should help Nvidia fight for, and eventually take, the performance crown in the ultra high-end market.

We do know that ATI should have its high-end DirectX 11 part at a similar date, and in the meantime both companies will focus more on cheaper cards in 2009, as the Year of the Ox will probably be a good year for selling cheaper, more affordable stuff. All of the mainstream and entry-level cards launching in the next three quarters will stick with DirectX 10 or 10.1, depending on which company you are talking about.

Many of you know that the high end helps sell the entry level and mainstream, so it matters quite a bit who wins this round, but it is still way too early to tell.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=11666&Itemid=1
 
nVidia's GT300 specifications revealed - it's a cGPU!

Over the past six months, we heard various bits and pieces of information about GT300, nVidia's next-gen part. We decided to stay silent until we had the information confirmed from multiple sources, and now we feel confident enough to disclose what is cooking in Santa Clara, India, China and other nV sites around the world.

GT300 isn't the architecture that was envisioned by nVidia's chief architect, former Stanford professor Bill Dally, but this architecture will give you a pretty good idea of why Bill told Intel to take a hike when the larger chip giant from Santa Clara offered him a job on the Larrabee project.

Thanks to Hardware-Infos, we managed to complete the puzzle of what nVidia plans to bring to market a couple of months from now.
What is GT300?

Even though it shares its first two letters with the GT200 architecture [GeForce Tesla], GT300 is the first truly new architecture since SIMD [Single Instruction, Multiple Data] units first appeared in graphics processors.

The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. The difference between the two is that GT300 parts ways with the SIMD architecture that dominates today's GPUs. GT300 cores rely on MIMD-like functions [Multiple Instruction, Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and compute operations on the fly. We're not exactly sure whether we should continue to use the terms "shader processor" or "shader core", as these units are now almost on equal footing with the FPUs inside the latest AMD and Intel CPUs.
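The SIMD-versus-MIMD distinction described above can be illustrated with a toy model - a sketch only, not real GPU behavior: under SIMD, lockstep lanes that diverge at a branch must execute both sides (with inactive lanes masked off), while MIMD-style cores each follow their own instruction stream.

```python
# Toy model of SIMD lockstep vs. MIMD independent execution at a branch.
# Path lengths are abstract "cycles"; all numbers are illustrative only.
def simd_cycles(lane_conditions, then_len, else_len):
    """Lockstep lanes: if lanes diverge, both branch paths are serialized."""
    cycles = 0
    if any(lane_conditions):       # at least one lane takes the branch
        cycles += then_len
    if not all(lane_conditions):   # at least one lane takes the other side
        cycles += else_len
    return cycles

def mimd_cycles(lane_conditions, then_len, else_len):
    """Independent cores: each runs only its own path; the group is done
    when the slowest core is done."""
    return max(then_len if c else else_len for c in lane_conditions)

lanes = [True, False, True, False]     # divergent branch outcomes
print(simd_cycles(lanes, 10, 10))      # 20 -> both sides serialized
print(mimd_cycles(lanes, 10, 10))      # 10 -> sides run concurrently
```

When all lanes agree, the two models cost the same; the MIMD advantage only appears under divergence, which is exactly the workload the article hints at with "simple and complex operations on the fly".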

GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. This number alone raises the computing power of GT300 by more than 2x compared to the GT200 core. Before the chip tapes out there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have more than double the computing power.
If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we are talking about no less than 3 TFLOPS in single precision. Double precision is highly dependent on how efficient the MIMD-like units turn out to be, but you can count on a 6-15x improvement over GT200.
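The single-precision figure can be reproduced with back-of-envelope arithmetic, assuming the GT200-style counting of 3 FLOPS per core per clock (one multiply-add plus one multiply; the article does not state this counting, so it is an assumption here):

```python
# Back-of-envelope single-precision throughput for the rumored part.
cores = 512
clock_hz = 2e9          # the hypothetical 2 GHz clock from the article
flops_per_clock = 3     # assumption: 1 MADD (2 FLOPS) + 1 MUL, as on GT200

tflops = cores * clock_hz * flops_per_clock / 1e12
print(tflops)  # 3.072 -> the article's "no less than 3 TFLOPS"
```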


This is not the only change - cluster organization is no longer static. The scratch cache is much more granular and allows for greater interaction between the cores inside a cluster. GPGPU, i.e. GPU computing, applications should really benefit from this architectural choice. When it comes to gaming, the obvious question is: how good can GT300 be? Do bear in mind that this 32-core cluster will be used in next-generation Tegra, Tesla, GeForce and Quadro cards.

This architectural change should result in a dramatic increase in double-precision performance, and if GT300 packs enough registers, both single-precision and double-precision performance might surprise every player in the industry. Given the timeline when nVidia began work on GT300, it looks to us like the GT200 architecture was a test run for the real thing coming in 2009.

Just like a CPU, GT300 gives direct hardware access [HAL] to CUDA 3.0, DirectX 11, OpenGL 3.1 and OpenCL. You can also program the GPU directly, though we're not exactly sure whether developing such a solution would be financially feasible. But the point is that now you can do it. It looks like Tim Sweeney's prophecy is slowly but surely coming to life.

source
 
You beat me to it :)

If it turns out like this, things are looking grim for Larrabee :)
 
The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. The difference between the two is that GT300 parts ways with the SIMD architecture that dominates today's GPUs. GT300 cores rely on MIMD-like functions [Multiple Instruction, Multiple Data] - all the units work in MPMD mode, executing simple and complex shader and compute operations on the fly. We're not exactly sure whether we should continue to use the terms "shader processor" or "shader core", as these units are now almost on equal footing with the FPUs inside the latest AMD and Intel CPUs.
Something like this was to be expected.

This is not the only change - cluster organization is no longer static. The scratch cache is much more granular and allows for greater interaction between the cores inside a cluster. GPGPU, i.e. GPU computing, applications should really benefit from this architectural choice. When it comes to gaming, the obvious question is: how good can GT300 be? Do bear in mind that this 32-core cluster will be used in next-generation Tegra, Tesla, GeForce and Quadro cards.
This should be the "CUDA shared memory", i.e. increased granularity aimed at keeping shared-memory bank conflicts to a minimum?
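For context on the bank-conflict question, here is a rough model, assuming hypothetical G80/GT200-class parameters (16 banks of 4-byte words; the actual GT300 organization is unknown). Accesses by threads in one group that map to the same bank are serialized:

```python
from collections import Counter

# Rough model of shared-memory bank conflicts. Parameters are assumptions:
# 16 banks, 4-byte words. The serialization factor is the largest number
# of threads in the group that hit the same bank in one access.
def max_bank_conflicts(addresses, num_banks=16, word_bytes=4):
    """Worst-case serialization factor for one group access."""
    banks = Counter((addr // word_bytes) % num_banks for addr in addresses)
    return max(banks.values())

# Stride-1 word access: each thread hits a different bank -> conflict-free.
print(max_bank_conflicts([4 * t for t in range(16)]))   # 1
# Stride-16 word access: every thread hits bank 0 -> fully serialized.
print(max_bank_conflicts([64 * t for t in range(16)]))  # 16
```

A more granular scratch cache, as the article describes it, would plausibly reduce how often the second (fully serialized) pattern occurs.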

Given the timeline when nVidia began work on GT300, it looks to us like the GT200 architecture was a test run for the real thing coming in 2009.

Pretty much!
It will be interesting to see what ATI's answer will be.
 
Bonus question: what is MIMD?
 
In computing, MIMD (Multiple Instruction stream, Multiple Data stream) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches.
 
Here comes the new "8800 GTX" :) Crysis... you're done for...
 
It will be interesting to see what ATI's answer will be.

[RV870 specification slide]


Launch: July 2009.

http://news.ati-forum.de/index.php/...-vorrausichtliche-spezifikationen-aufgetaucht
http://www.hardware-infos.com/news.php?news=2908
 
It's going to be a fierce fight soon, I can't wait! Cheers to everyone!
 
When will it come out, or when is it expected to?
 
By the end of the year.
It looks like nVidia will sweep them this year.

It seems that way to me too; word is that ATi will have 240 (1200 effective) shaders. So far they've had 160 and were slower than nVidia's 240, and now it will be ATi's 240 vs nVidia's 512, so unless ATi pulls off some kind of miracle, it's going to be tough :)
 
ATi will probably aim to offer 70-80% of GT300's speed at half its price and thereby make yet another excellent mainstream product. However, I'm afraid that merely beefing up the current architecture won't bring that kind of performance. This GT300 sounds seriously souped up, but it also tells me that Nvidia will keep on making big, expensive chips...
 
In computing, MIMD (Multiple Instruction stream, Multiple Data stream) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches.

That is a very broad definition of MIMD, and a pile of processors falls under it - among others nVidia's G80 (it executes different instructions on different data).
The thing is, the text claims "everything until now was SIMD, and GT300 will be MIMD" without any further explanation. On top of that, neither SIMD nor MIMD are precisely defined terms; they are more like general concepts. Worse still, new terms are then introduced that again are not precisely defined (SPMD, SIMT, MPMD...). So how is one supposed to work out what the writer meant?

One possible answer is that GT300 will bring better branch handling through dynamic thread packing, which would be very significant.
512 SPs looks realistic for a 40 nm process.
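The "dynamic thread packing" idea can be sketched with a toy model - purely illustrative, since whether GT300 actually regroups threads this way is speculation in this thread. Threads are regrouped by branch outcome so that each fixed-width group executes only one side of the branch:

```python
# Toy model: static thread grouping vs. repacking threads by branch
# outcome. A group whose lanes disagree must execute both branch sides.
def static_groups(conditions, width):
    """Threads stay in their original fixed-width groups."""
    return [conditions[i:i + width] for i in range(0, len(conditions), width)]

def repacked_groups(conditions, width):
    """Threads are sorted by branch outcome, then regrouped."""
    ordered = [c for c in conditions if c] + [c for c in conditions if not c]
    return [ordered[i:i + width] for i in range(0, len(ordered), width)]

def divergent(groups):
    """Count groups whose lanes disagree on the branch."""
    return sum(1 for g in groups if any(g) and not all(g))

conds = [True, False] * 8                    # 16 threads, alternating outcomes
print(divergent(static_groups(conds, 4)))    # 4 -> every group diverges
print(divergent(repacked_groups(conds, 4)))  # 0 -> all groups are coherent
```

In the static case every group pays for both branch paths; after repacking, none do - which is why such a mechanism would matter so much for branchy shader code.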


This was written before Larrabee's programming model was published. Because of that, the author assumed Larrabee would use the same approach as SSE, which turned out to be wrong, so the text should be taken with a grain of salt.


Looking at RV740 as the first example of what can be built on 40 nm, these specifications look very realistic. On the other hand, given how the RV770 rumors turned out, who knows what's realistic :) The only reliable information can be expected from chiphell.
 
GT300 is announced for the end of the year, while ATI's R800 is announced for July this year. I'm really curious whether nVidia will release something then too, because I doubt they will let ATI have the better graphics card for four months. They will have to come up with some kind of answer.
 
At first I expect NVIDIA to have a lot of driver problems - not only bugs, but also compatibility with lesser-known titles, and then performance. It will be much better to buy the second GT MIMD generation than the first, and we should also see what ATI offers... I expect nothing from Larrabee yet...
 
GT300 is announced for the end of the year, while ATI's R800 is announced for July this year. I'm really curious whether nVidia will release something then too, because I doubt they will let ATI have the better graphics card for four months. They will have to come up with some kind of answer.

If ATi dominates for those 4 months (i.e. makes money :D), they will surely cut prices before the GT300 launch, and I'm really curious what prices nVidia will set then...
There's no doubt GT300 will trounce RV800, but what good is that if ATi is offering 2.3 TFLOPS for €150...

Cheers
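A hedged guess at where a figure like "2.3 TFLOPS" could come from: the 240 (1200 effective) shader count appears earlier in the thread, but the five-wide VLIW layout, the 2-FLOPS-per-ALU counting and the ~950 MHz clock below are assumptions, not numbers from the thread.

```python
# Back-of-envelope single-precision throughput for the rumored ATI part.
shaders = 240           # from the thread: 240 shaders, 1200 effective ALUs
alus_per_shader = 5     # assumption: five-wide VLIW (240 * 5 = 1200)
flops_per_alu = 2       # assumption: one multiply-add counts as 2 FLOPS
clock_hz = 950e6        # assumption: the thread gives no clock speed

tflops = shaders * alus_per_shader * flops_per_alu * clock_hz / 1e12
print(round(tflops, 2))  # 2.28 -> close to the quoted 2.3 TFLOPS
```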
 
Somehow it seems to me that both camps will hit the market at roughly the same time; the RV790 is (too) fresh on the market.
GT300 at 40 nm... a real stretch, considering there isn't even a lowest-end model on that process yet. I bet it will be a 55 nm mega-chip, like G80 or GT200, and then a GT300b may eventually follow.
Oh, and it's odd how NV isn't changing its mega-chip paradigm. ATI's recipe has proven itself so far.
 
A single chip is always the better option, whether you call it "mega" or "bulky" or whatever. The fact that the competition has to pair two chips just to catch up with, or barely overtake, that one chip hardly counts as a good recipe in my book :p
 
This was written before Larrabee's programming model was published. Because of that, the author assumed Larrabee would use the same approach as SSE, which turned out to be wrong, so the text should be taken with a grain of salt.

These are all assumptions when it comes to Larrabee, which is an unknown. Greg explains in his talk what MIMD is and shares his own experience.

And you probably mean LRBni - Larrabee New Instructions?

At first I expect NVIDIA to have a lot of driver problems - not only bugs, but also compatibility with lesser-known titles, and then performance. It will be much better to buy the second GT MIMD generation than the first, and we should also see what ATI offers... I expect nothing from Larrabee yet...

A similar story was told about G80 too. GT300 cannot differ that much from today's architecture.
 
A similar story was told about G80 too. GT300 cannot differ that much from today's architecture.

Judging by what they are describing now, it will differ quite a bit... Like no GPU before it, relative to its predecessors. This is now practically a full multipurpose CPU :)
 
Judging by what they are describing now, it will differ quite a bit... Like no GPU before it, relative to its predecessors. This is now practically a full multipurpose CPU :)

It will all take its course and evolve.
What I personally would like to see happen is for M$ to lose its dominance with DX, i.e. for developers to start writing their own GPGPU programming languages. An architecture like this makes that possible.
 
That is unlikely to happen any time soon... The problem is that DX is a standard that every hardware maker and every software maker has to support. Remember what it was like before DX - every game had to be tweaked to work with the hardware, some hardware did not support certain games, and so on.

Imagine every development team starting to write its own language and driving the hardware with it - it would take them a hundred years to get it working on every graphics card...
 
That is unlikely to happen any time soon... The problem is that DX is a standard that every hardware maker and every software maker has to support. Remember what it was like before DX - every game had to be tweaked to work with the hardware, some hardware did not support certain games, and so on.

Imagine every development team starting to write its own language and driving the hardware with it - it would take them a hundred years to get it working on every graphics card...

That is exactly where the fully programmable architecture that Nvidia and Intel will push comes in. It means you will be able to write a rendering engine in CUDA or OpenCL that executes directly on the GPU. DX and Windows stop being important.

The languages that talk directly to the hardware will probably be CUDA, OpenCL and Larrabee C/C++, and it is up to the developer to decide what kind of rendering engine to write. Naturally, support for DX and OpenGL will still exist.

It all depends on whose product catches on.
 
Do you really think an engine can just be written in CUDA or OpenCL or whatever? That takes an investment of tens of millions, which is what a couple of games cost...
 