
Fermi is Nvidia's DirectX 11

Status
Closed for new replies.
Let's get back on topic...
The GF100 design keeps everything the GPU needs for the current job in L1/L2 caches on the GPU itself, so there are no latency problems. For example, when executing operations such as:
- vertex fetch (from video memory into L2)
- vertex shader: reads vertex values from L2; the result goes back into L2
- tessellator: reads from L2 and the result again goes into L2
- rasterizer reads vertices from L2
- pixel shader reads from L2, writes to L2 and DRAM
- ROPs can read and write both L2 and DRAM.
GF100 has 768KB of L2 cache that is dynamically load-balanced for maximum performance.

G200 used its cache only for textures, and only as read-only. The GF100 cache is read-write accessible from all units (vertex, pixel and geometry shaders, tessellators, ROPs, texture fetch, ...)

The ATI 5870, on the other hand, sends the result of e.g. the geometry shader to video memory and then pulls the data back from there for rasterization.
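The contrast between the two dataflows can be sketched as a toy count of DRAM round-trips. The pipeline stages and the spilling behavior are the ones described above, but the counting is purely illustrative, not a real hardware measurement:

```python
# Toy model of the two dataflows described above (not real hardware behavior):
# the GF100-style path keeps intermediate pipeline results in on-chip L2,
# while the spilling path round-trips every intermediate through video memory.

PIPELINE = ["vertex fetch", "vertex shader", "tessellator",
            "rasterizer", "pixel shader", "ROP"]

def dram_transfers(spill_intermediates: bool) -> int:
    """Count DRAM transfers for one pass through the pipeline."""
    transfers = 1                      # initial vertex fetch from video memory
    for _ in PIPELINE[1:]:
        if spill_intermediates:
            transfers += 2             # write result to DRAM, read it back
    transfers += 1                     # final write of pixels to the framebuffer
    return transfers

print(dram_transfers(spill_intermediates=False))  # on-chip L2 path: 2
print(dram_transfers(spill_intermediates=True))   # spilling path: 12
```

The exact numbers depend on how many stages actually spill, but the point is the asymmetry: the on-chip path touches DRAM only at the ends of the pipeline.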
 
A few more interesting bits:

That leaves us on a final note: clocks. The core clock has been virtually done away with on GF100, as almost every unit now operates at or on a fraction of the shader clock. Only the ROPs and L2 cache operate on a different clock, which is best described as what’s left of the core clock. The shader clock now drives the majority of the chip, including the shaders, the texture units, and the new PolyMorph and Raster Engines. Specifically, the texture units, PolyMorph Engine, and Raster Engine all run at 1/2 shader clock (which NVIDIA is tentatively calling the "GPC Clock"), while the L1 cache and the shaders themselves run at the full shader clock. Don’t be surprised if GF100 overclocking is different from GT200 overclocking as a result.
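The clock-domain split described above can be summarized in a few lines of Python. The 1400 MHz shader clock is just an assumed example value, since final clocks had not been announced:

```python
# Clock domains on GF100 as described above: most units run at the shader
# clock or half of it (the tentative "GPC clock"). The 1400 MHz input is an
# illustrative value, not a confirmed specification.

def gf100_clocks(shader_clock_mhz: float) -> dict:
    gpc_clock = shader_clock_mhz / 2   # the tentative "GPC clock"
    return {
        "shaders": shader_clock_mhz,
        "L1 cache": shader_clock_mhz,
        "texture units": gpc_clock,
        "PolyMorph Engine": gpc_clock,
        "Raster Engine": gpc_clock,
    }

print(gf100_clocks(1400))
```

Only the ROPs and L2 cache fall outside this mapping, on what remains of the old core clock.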

Once the PolyMorph engines have finished their work, the resulting data are forwarded to the GF100's four raster engines. Optimally, each one of those engines can process a single triangle per clock cycle. The GF100 can thus claim a peak theoretical throughput rate of four polygons per cycle, although Alben called that "the impossible-to-achieve rate," since other factors will limit throughput in practice. Nvidia tells us that in directed tests, GF100 has averaged as many as 3.2 triangles per clock, which is still quite formidable.

Sharp-eyed readers may recall that AMD claimed it had dual rasterizers upon the launch of the Cypress GPU in the Radeon HD 5870. Based on that, we expected Cypress to be able to exceed the one polygon per cycle limit, but its official specifications instead cite a peak rate of 850 million triangles per second—one per cycle at its default 850MHz clock speed. We circled back with AMD to better understand the situation, and it's a little more complex than was originally presented. What Cypress has is dual scan converters, but it doesn't have the setup or primitive interpolation rates to support more than one triangle per cycle of throughput. As I understand it, the second scan converter is an optimization that allows the GPU to push through more pixels, in cases where the polygons are large enough. The GF100's approach is quite different and really focused on increasing geometric complexity.
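The throughput figures above are simple products of triangles-per-clock and clock speed. A quick sketch, assuming a hypothetical 700 MHz GPC clock for GF100 (final clocks were not public at the time):

```python
# Peak triangle rates implied by the figures above. Cypress's 850 MHz clock
# is its official spec; the 700 MHz GF100 GPC clock is an assumed example
# value (half of a hypothetical 1400 MHz shader clock).

def peak_tris_per_sec(tris_per_clock: float, clock_hz: float) -> float:
    return tris_per_clock * clock_hz

cypress = peak_tris_per_sec(1, 850e6)           # 1 tri/clock at 850 MHz
gf100_peak = peak_tris_per_sec(4, 700e6)        # "impossible-to-achieve" rate
gf100_avg = peak_tris_per_sec(3.2, 700e6)       # directed-test average

print(f"Cypress: {cypress/1e9:.2f} Gtri/s")
print(f"GF100 peak: {gf100_peak/1e9:.2f} Gtri/s")
print(f"GF100 measured: {gf100_avg/1e9:.2f} Gtri/s")
```

Even at the measured 3.2 triangles per clock, the assumed clock puts GF100 well above Cypress's one-per-cycle setup limit.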

NVIDIA has made the GF100 more parallel than any GPU to date, which obviously makes it even less serialized. NVIDIA claims 8X the geometry performance of GT200. This re-ordering of the graphics pipeline increased the die size by about 10% and, from our understanding of the issue, is the reason GF100 is "late."

This re-design may pay off though, if the slide above about the Unigine benchmark is to be believed. NVIDIA is claiming much higher performance in this benchmark with tessellation compared to the Radeon HD 5870, and we all know that benchmark was written specifically on Radeon HD 5000 series hardware. And while it is only a benchmark, the Unigine application is the best we have seen for leveraging DX11 tessellation, showing off huge image-quality impacts. If GF100 is beating the Radeon HD 5870 that much in a benchmark that was written for the Radeon HD 5870 in the first place, that just spells "awesome" for the kind of geometry performance potentially here.
 
So Fermi is just about to launch, unless somebody faked the benchmarks for the card.
Are the prices known for the whole segment of cards based on the new chip?
Man, it's going to be cheap to get a second 5850 card into the machine.
 

Next generation games will demand much more than just fast rendering of triangles and pixels—they will require the GPU to compute physics, simulate artificial intelligence, and render advanced cinematic effects. These demands are all met by the next generation NVIDIA® CUDA™ architecture in GF100 GPUs.

Same story as the Radeon 5000 and DX11 campaign :d. Which games are these Nvidia PR people talking about? When will they come out? Probably when the new generation of consoles arrives :trust:

Cevet said there won't be any real improvement on the games front until that happens, because nobody is crazy enough to make games just for the PC.
 
Except that DiRT 2 shipped alongside the 58xx :D, and Aliens vs. Predator will probably ship alongside the GTX 3xx, and there are your two games...
 
Fermi has arrived... April Fools! :D May is just around the corner... :d
Seriously, they could release some info; I'm surprised the hype is on hold :S:
 
Wait for the dust around the iPad to settle first. :)
 
The new GF100 models will be called GTX 480 and GTX 470.
 
The new GF100 models will be called GTX 480 and GTX 470.

Strange... I suppose they want to send the message that the new generation is twice as good as the previous one ;)

The other option is a rebrand of the 200 series as the 300 series, but I doubt that will be the case; producing those hasn't been very profitable for them so far anyway.
 
I think the GTX 3xx series is reserved for the mobile versions.
 
What jokers; then they should have gone with GTX 580 and so on, since they already want to race ATI.
 
@prostreet:
haha.. I'm afraid it will be the other way around.
 
^ Considering how they intend to price them, that doesn't mean much to us, the average crowd in Serbia. ;) The GTX 280, and later the GTX 285, were the fastest single-chip solutions of their day, so how many people bought those, and how many bought the 4870?... :)
 
I reckon the prices will be at least 50% higher than the 5870's.
 
In the IT world, if something is 50-60% faster and only 50% more expensive, then it's a great deal :)
 
Except that it will be very hard for it to be 50% faster. ;) Honestly, I'd love that, but we have to wait and see...
 
Realistically... for today's titles a 4850 or a GTX 260 is quite enough. I only said GF100 will be significantly faster than the 5870.
 
Except that it will be very hard for it to be 50% faster. ;) Honestly, I'd love that, but we have to wait and see...

I can't remember the IT industry recording that kind of performance jump in recent times, except in specially prepared benchmarks.
 
Realistically... for today's titles a 4850 or a GTX 260 is quite enough.

I was just about to write something along those lines, but you beat me to it with that sentence. :) The situation is this: game developers have absolutely no intention of abandoning the multiplatform trend until the next generation of consoles arrives, which means that for at least two more years we can expect mostly U3-engine-based titles, which fly on a 9600 GT. I'm just wondering where Nvidia's new generation of cards fits in here, and what they can offer the buyer to justify their enormous price. DX11?!.... :d

The tragicomic fact is that most Fermi reviews (when it finally appears) will, for the umpteenth time, run benchmarks of the three-year-old Crysis... :d
 
It doesn't have to be 50% faster. Intel's Extreme processors, which are 10% faster than some mid-range parts, are 400% more expensive, yet nobody disputes Intel's right to do that, and their business is doing just fine.
@prostreet
I have one, but I don't need anything like that right now. As yooyo said, a 150-euro card handles all my work. But if in the meantime they release, say, the final version of the V-Ray GPU renderer, or a Premiere or After Effects where the GPU does full real-time processing, I'll buy one without a second thought. And if that scaled well across two GPUs, I'd buy two GTX 480s :D
 
@Vuk_Gamer: by your logic, ATI shouldn't have released its new series of cards either (considering the drivers, maybe it shouldn't have), since a 9600 runs all the new games, and that isn't exactly true.
 
If we compare food prices in some countries (Moscow, Russia comes to mind right now :d), where you spend 200-300 euros a week on food alone (i.e. a few kilograms of meat, vegetables, fruit, etc.), I see no problem with buying a 400-euro card.
 
It doesn't have to be 50% faster. Intel's Extreme processors, which are 10% faster than some mid-range parts, are 400% more expensive, yet nobody disputes Intel's right to do that, and their business is doing just fine.
If Intel based its business only on selling EE processors, it could padlock the company right away. The money comes from the mainstream, and that's where all eyes are. Only the rare buyer picks up a 500-euro card...
 
The discussion here was about people being "shocked" that they'll have to pay 50% more money for 50% more power.
 