
Fermi is Nvidia's DirectX 11

Status
Closed for new replies.
Quoting hardware-infos.com:

Nvidia G300 has passed tape-out

As we have learned from several sources, Nvidia's next-generation chip has passed its (first) tape-out earlier than expected. The so-called G300 graphics chip - the "T" for Tesla appears to have been dropped, as with the G(T)200 - is currently in the A1 stepping; whether this is the final stepping with which it will ship cannot yet be said with certainty.
Basically, a tape-out means that the layout of the chip has been finalized. The next step is to eliminate potential errors so that the chip can run for the first time. According to our information, Nvidia has already mastered this phase as well and is now apparently at the stage of further debugging, which will ultimately decide whether the graphics chip goes into mass production in this stepping or undergoes another redesign.
When the 2.4-billion-transistor G300, aka GeForce GTX 380, will see the light of day cannot yet be said exactly. On Nvidia's internal roadmap, however, it is still slated for the last quarter of this year.
As a confidential source told us in this context, the launch could be delayed not only by a possible further stepping but also by TSMC's 40-nanometer production; Nvidia, just like AMD, is apparently waiting patiently for better yields in the fledgling manufacturing process.
In addition, they are pushing ahead with the production of high-speed GDDR5 memory, which on the top model, despite a 512-bit wide memory interface, is supposed to run at more than 1 GHz and thus provide more than 256 GB/s of memory bandwidth.
 
Charlie's articles (although they really are entertaining) should be taken with a good deal of salt, so the claims about performance, (botched) architectures, delays and the like don't deserve much comment. Maybe they'll come true, maybe they won't.

What is far more significant is the problem with 40nm yields, which TSMC itself confirms. The problems can only become more pronounced with a large chip, which GT300 will definitely be (2.4 billion transistors). Performance aside, if there aren't enough working dies, the price will be very high. Moreover, if AMD catches up in performance and applies price pressure (this much at least is certain), at least part of his "prophecies" may turn out to be true.
 
Nvidia G300 mastered tape-out

As affirmed by several sources, Nvidia's next-generation chip mastered its tape-out successfully earlier than expected. The so-called G300 graphics chip - the "T" for Tesla seems to have been dropped, as with the G(T)200 - is in the A1 stepping; whether this will be the final stepping cannot yet be said with certainty.

Basically, a tape-out means that the layout of the chip has been finalized. The next step would be to eliminate potential errors so that the chip becomes capable of running for the first time. According to our information, Nvidia has already mastered this phase too, and now seems to be in the stage of further debugging, which will eventually decide whether the graphics chip goes into mass production with the current stepping or receives another re-design.

When the G300, alias GTX 380, with its 2.4 billion transistors will see the light of day cannot yet be determined. On Nvidia's internal roadmap, though, it is booked in for the last quarter of this year.

As an intimate source told us in this context, the launch could be delayed not only by another stepping but also by TSMC's 40 nm production; apparently Nvidia, just like AMD, is waiting for better yields in the still immature manufacturing technique.
Additionally, they are pushing the production of GDDR5 memory, which is supposed to run at 1 GHz on the top model despite the 512-bit memory interface, and therefore offer more than 256 GB/s of memory bandwidth.

;)
 
http://www.brightsideofnews.com/new...dy-taped-out2c-a1-silicon-in-santa-clara.aspx

On one side we have Theo Valich and Fud hyping up GT300, and on the other we have Charlie trashing it left and right. :D
Which camp to side with? The truth, as always, is somewhere in the middle.
In Theo's news, pay attention to the following.
If 40nm production yields don't satisfy, expect 32nm bulk silicon parts coming out of a certain Foundry with Global intentions sooner than later.
It matches my thinking completely, because according to GF's roadmap they are preparing to start 32nm bulk in Q4 of this year, so volume shipments can be expected to begin towards the end of Q1 2010. ATI is 100% moving to GF, so nothing revolutionary should be expected on TSMC's 40nm. I think they are already preparing a new chip for GF's 32nm bulk.
On the other hand, nVidia is an unknown, and I think they are taking a big risk with TSMC's 40nm process, which by all accounts is a disaster (remember that it was supposed to start at the BEGINNING of this year).
We'll see, it will be very interesting. :p
 
There won't be GT300 cards before Q1 2010... 40nm derivatives of the G92 can be expected in Q3 of this year.
 
Nvidia G300 samples with 700/1600/1100 MHz

A bit of rough Google translation...

Now we can also supply the clock frequencies of the current samples, with which Nvidia is said to be so happy that they could even end up being the final ones.

Accordingly, samples in the G300 A1 stepping are running at a 700 MHz chip clock, 1600 MHz shader clock and 1100 MHz memory clock - the latter we had already hinted at in our previous news on the tape-out.
Based on the already known shader units and the width of the memory interface, the first accurate quantitative comparisons can now be drawn.

As we already reported, the G300 has 512 instead of 240 shader units. The rough structure, namely 1D shader units that can each calculate one MADD and one MUL per clock, will probably remain intact, so it can already be worked out that the current samples reach a theoretical computing capacity of as much as 2457 Gigaflops.
Nevertheless, the comparison with the G200, as represented by the GTX 280, limps slightly, because the G300 no longer uses classical SIMD units but MIMD-like units; purely quantitatively, though, this reveals 163 percent higher computing power.

The memory bandwidth can also now be considered, with the memory clock known. Thus, at 1100 MHz, Nvidia arrives at an impressive 281.6 GB/s. Quantitatively, this corresponds to exactly 100 percent more memory bandwidth than the GTX 280.

Theoretical statements about TMU and ROP performance cannot yet be made despite the now-known chip clock, because their numbers are not yet known and, in the case of the ROPs, it is not even certain whether they will continue to be fixed-function units.
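The article's figures can be checked with simple arithmetic; a minimal sketch, assuming 3 flops per shader unit per clock (one MADD plus one MUL) and GDDR5's 4 data transfers per command-clock cycle:

```python
# Back-of-the-envelope check of the article's figures.
# Assumptions: each 1D shader unit does one MADD (2 flops) plus one MUL
# (1 flop) per clock; GDDR5 moves 4 bits per pin per command-clock cycle.

def gflops(units, shader_mhz, flops_per_clock=3):
    """Theoretical single-precision throughput in GFLOPS."""
    return units * shader_mhz * flops_per_clock / 1000

def bandwidth_gbs(bus_bits, mem_mhz, transfers_per_clock=4):
    """Theoretical memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_mhz * transfers_per_clock / 1000

g300 = gflops(512, 1600)             # 2457.6 -> the quoted "2457 Gigaflops"
gtx280 = gflops(240, 1296)           # ~933.1 GFLOPS for the GTX 280
print(round(g300 / gtx280 - 1, 2))   # 1.63 -> the "163 percent" figure

print(bandwidth_gbs(512, 1100))      # 281.6 GB/s, double the GTX 280
```

The GTX 280 baseline (240 units at 1296 MHz) gives roughly 933 GFLOPS, which is exactly where the 163 percent figure comes from, and 281.6 GB/s is likewise double the GTX 280's roughly 141 GB/s.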
 
28nm next year...?

TSMC on track to enter production at 28nm


Taiwan Semiconductor Manufacturing Company (TSMC) is set to unveil Reference Flow 10.0 for the 28nm process at the upcoming Design Automation Conference (DAC) in July 2009, according to industry sources. The world's top contract chipmaker is expected to deliver its 28nm process as a full node technology on schedule.

TSMC in September 2008 said its 28nm node, which will offer the option of high-k metal gate (HKMG) and silicon oxynitride (SiON) materials, will enter volume production in the first quarter of 2010. The pure-play foundry has successfully ramped in 40nm, supported by its latest production-proven design infrastructure Reference Flow 9.0.

The sources estimate that 40nm will account for 8-10% of TSMC's total revenues by the end of 2009, and the proportion is likely to reach 15-20% in the second quarter of 2010. TSMC's major customers at 40nm include handset chipmakers, GPU vendors and FPGA chip suppliers.

Nvidia reportedly has completed the tape-out of its next-generation (GT300) GPU on TSMC's 40nm, and is also looking to adopt the foundry partner's 28nm technology.

And regarding 40nm:

Nvidia to increase 40nm orders with TSMC for 2009


Nvidia has moved to scale up its orders for 40nm graphics chips at foundry partner Taiwan Semiconductor Manufacturing Company (TSMC) for 2009, according to sources at graphics card makers. The chip designer expects 40nm to account for 30% of its overall GPU shipments by the end of 2009.

Nvidia has recently completed the tape-out of its next-generation (GT300) GPU on TSMC's 40nm process, indicated the sources.

Nvidia has also expressed interest in becoming an initial client of TSMC's 28nm process, which will start initial production in first-quarter 2010, the sources noted.
 
That transition from 40nm to 28nm in 6-9 months seems a bit too optimistic to me.
 
The way things stand at the moment, Charlie may yet turn out to be right... Nvidia won't have DX11 cards until H1 '10.

For now these are only rumors (of course ;)). In short, the source is someone inside TSMC itself, and the reason is "re-working a lot of its planned GPUs". What that really means is hard to say at this point. Problems with clocks, power consumption or performance are all possible, but I'm sure we won't find out exactly what.

This H1 2010 probably means sometime in April, maybe May, which is a likely 6 months of delay relative to the competition.
 
A logical move. Given the situation with multiplatform games, there's no need to release anything faster.

And maybe the "re-working" has something to do with this?

GT300 is a big new architectural change
Written by Fuad Abazovic
Monday, 08 June 2009 10:34

Not just GT200 40nm with DX11

Nvidia is trying to change the world once again. The company tends to make some big architectural changes and it did it with G80 / G92 and last time with its GT200 generation.


The next in the line of big conceptual changes and rethinking of the graphics world as we know it is GT300. We don't even know if the codename is right (it is, at least to some people), but the chip is real and it should be coming either in late 2009 or early 2010. No one, even at Nvidia, knows exactly when this will happen, but they all have high hopes it will be this year rather than next.

Since Nvidia is talking and thinking more about the computing market than about graphics dominance, it is easy to believe that the new chip will be fitted for massively parallel computation and should be much faster in this GPU CUDA, DirectX Compute, OpenCL world. It is only natural that performance per watt will increase and that with 40nm you can expect many transistors. DirectX 11 support is naturally there, but DX11 will only matter when the big games supporting it come, and it's unlikely that we will get any important titles before deep into 2010.

Without a doubt, one thing is certain: once it comes, GT300 will be a very interesting chip.
 
I doubt it's what you quoted; that concerns the GPU's overall design concept, i.e. strategic decisions made long ago (at least 12-24 months). To me personally, "re-working" sounds more like fixes at one minute to midnight (or, to put it more nicely, additional optimizations). Of course, I might be wrong :p

It is now all but certain that they will miss the end-of-year shopping season (which, especially in this time of crisis, will be very important, because that's when even people who set nothing aside for hardware during the year will probably "give in" and buy a new card).
 
A logical move. Given the situation with multiplatform games, there's no need to release anything faster.

And maybe the "re-working" has something to do with this?

Are you saying it's in Nvidia's interest to strangle the PC as a gaming platform? By the way, Win7 brings DX11, and the 8xxx/GTX architecture is not compatible with DX11 at all, while the 4xxx series will be able to run it. Nvidia has also announced a radical change in chip technology (and philosophy), so by all accounts it will end up more similar to ATI's designs. The idea was to use 512-bit technology (which is in dispute) to deliver a big performance advantage.
 
Maybe it's a bit early, but to me GT300 is starting to look like the R600. It too was expected to set new standards among graphics cards, and we know how that ended.
 
Maybe it's a bit early, but to me GT300 is starting to look like the R600. It too was expected to set new standards among graphics cards, and we know how that ended.

If only those new standards (DX11, ...) could be put into practice right away...
 
It is compatible with DX 10.1, not with DX10; that is, those cards will probably be usable, but in DX10 mode (similar to how it was with the GF4).

http://www.anandtech.com/video/showdoc.aspx?i=3507&p=8

Because DirectX 11 will run on down-level hardware, and at the release of DX11 we will already have a huge number of cards on the market capable of running a subset of DX11 (bringing with it a better, more refined programming language in the new version of HLSL and seamless parallelization optimizations), we will very likely see the first DX11 games only implementing features that can run completely on DX10 hardware.

Of course, at that point developers can be fully confident of exploiting all the aspects of DX10 hardware, which they still aren't completely taking advantage of. Many people still want and need a DX9 path because of Vista's failure, which means DX10 code tends to be more or less an enhanced DX9 path rather than something fundamentally different. So when DirectX 11 finally debuts, we will start to see what developers could really do with DX10.
The complexity of the upgrade, however, is mitigated by the fact that this is nothing like the wholesale changes made in the move from DX9 to DX10: DX11 is really just a superset of DX10 in terms of features.
 
From the same Anandtech article:
This enables the ability for DX11 to run on down-level hardware (where DX11 specific features are not used), which when combined with the enhancements to HLSL with OOP and dynamic shader linking mean that developers should really have fewer qualms about moving from DX10 to DX11 than we saw with the transition from DX9.
So, similar to how a DX7 card like the GF4 MX could be used in DX8.1 games. That option was left in for the sake of the transition; this so-called compatibility makes it possible to simply switch off the DX11-specific features.
 
Honestly, all this DX talk is getting tiresome. Why don't they just release something that actually uses it, and only then say "see, this uses such-and-such and looks like this because of that", instead of boring numbers.
 
Nvidia GT300 said to come with 225 watt TDP
Initially the GT300 was said to need up to 300 watt, but it seems like Nvidia optimized the chip heavily.
Up to now the power consumption of Nvidia's DirectX 11 GPU, the GT300 with its 2.4 billion transistors and 40 nm structure, was supposed to reach 300 watts. But Nvidia is said to have optimized the chip to lower the consumption drastically. A corresponding GTX 380 graphics card will need only 225 watts and two 6-pin power connectors, says a report at brightsideofnews.com. This is actually still a lot of energy, but given the expected 512 shader units and clock speeds of 700 MHz (GPU), 1,600 MHz (shader) and 1,100 MHz (GDDR5 VRAM), it is quite impressive.
http://www.pcgameshardware.com/aid,686772/Nvidia-GT300-said-to-come-with-225-watt-TDP/News/
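The two 6-pin connectors are consistent with the quoted 225 W: a PCIe x16 slot delivers up to 75 W and each 6-pin auxiliary connector another 75 W. A minimal sanity check (the connector limits are taken from the PCIe spec, not from the article):

```python
# PCIe board power budget for a card with two 6-pin connectors.
# Assumed limits: 75 W from the x16 slot, 75 W per 6-pin connector,
# 150 W per 8-pin connector.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_limit(six_pin=0, eight_pin=0):
    """Maximum board power in watts for a given connector layout."""
    return PCIE_SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_limit(six_pin=2))   # 225 W, matching the reported TDP
```

So 225 W is exactly the ceiling of a slot-plus-two-6-pin layout; anything above it would have required an 8-pin connector, as on the GTX 280.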

225w :zgran:

who realistically needs this? a new flop of a furnace à la 2900XT is on its way :exploder:
 
^^A furnace?

The HD4890 has a TDP of around 190-200W.

Considering this is a single chip of a new generation, which the competition will apparently again counter with two (or more, lol) chips on a single PCB, that's quite decent; plus they say it will need two 6-pin connectors, which is actually excellent for high-end.
Or were you perhaps comparing it with the HD4770? :d

Anyway, never mind, as long as you think it's a flop of a furnace :D
 
It's interesting that some people have already written this chip off, declared it a flop, a furnace, a new R600, etc. And then they say DX11 games will run on the ATI 4xxx series but not on the GT200 series of cards. Very interesting. :D
 
So, similar to how a DX7 card like the GF4 MX could be used in DX8.1 games. That option was left in for the sake of the transition; this so-called compatibility makes it possible to simply switch off the DX11-specific features.

http://forum.beyond3d.com/showthread.php?p=1195567#post1195567
MVP
Each Direct3D version has its own dedicated runtime that can talk to one or more different driver interfaces. On Vista the runtimes for anything up to Direct3D 9 talk to the Direct3D 9 driver interface. The Direct3D 10 runtime makes use of the Direct3D 10 driver interface. 10.1, the runtime that is included in SP1, works with the Direct3D 10 and 10.1 driver interfaces. Finally, the upcoming Direct3D 11 runtime will work with the 10, 10.1 and 11 driver interfaces.

So once DX11 hits, then WinOS would have the following:

Driver interfaces
  • Direct3D 9 driver interface
  • Direct3D 10 driver interface
  • Direct3D 10.1 driver interface
  • Direct3D 11 driver interface

Direct3D Runtimes
  • Direct3D ≤9 runtimes (talk to the Direct3D 9 driver interface)
  • Direct3D 10 runtime (talks to Direct3D 10 driver interface)
  • Direct3D 10.1 runtime (talks to Direct3D 10, 10.1 driver interfaces)
  • Direct3D 11 runtime (talks to Direct3D 10, 10.1, 11 driver interfaces)

So, unless a DX11 game that uses tessellation comes out before nVidia releases its DX11 GPU, neither manufacturer has a compatibility advantage, as you tried to claim for 10.1.
 
Empirically measured power consumption:

4890 ~ 120W
4870 ~ 130W
4870X2 ~ 260W
4850X2 ~ 225W
4850 ~ 110W
GTX285 ~ 150W
GTX280 ~ 180W
GTX260 65nm ~ 135W
GTX260 55nm ~ 105W

The declared TDP is higher than the measured values.
 
Folks, the key word is multiplatform! Who needs DX11? Maybe GSC or Crytek will deliver something on that front in the near future? Is anything announced? Maybe I missed something?

Are you saying it's in Nvidia's interest to strangle the PC as a gaming platform? By the way, Win7 brings DX11, and the 8xxx/GTX architecture is not compatible with DX11 at all, while the 4xxx series will be able to run it. Nvidia has also announced a radical change in chip technology (and philosophy), so by all accounts it will end up more similar to ATI's designs. The idea was to use 512-bit technology (which is in dispute) to deliver a big performance advantage.
There's no logic in what you're saying! Nvidia works closely with all game developers. And it seems to me you've missed quite a lot? Nvidia's and ATI's philosophies are totally different.
 
Whether it will be a flop, a miss, a furnace or whatever, we'll only know once it appears. For now it's all guesswork.

Still, if it appears sometime in April/May (Q2 2010), it definitely won't have the impact it would have had if it had appeared when NVidia originally planned, i.e. Q4 2009. In that case AMD will hold the performance crown for 6 months, and it's not impossible that they'll have a successor already by June/July (or somewhat later); it took them only 8-9 months even to go from RV670 to RV770.

GeForce is still a very strong brand, but they'll need a lot of marketing money to convince anyone who knows hardware even a little to buy two-year-old technology instead of something that supports DX11.

Someone who understands what we're discussing here doesn't need convincing. And someone who doesn't, even less so. What matters to people is that all games run decently.
 
So, unless a DX11 game that uses tessellation comes out before nVidia releases its DX11 GPU, neither manufacturer has a compatibility advantage, as you tried to claim for 10.1.

You know what, it turns out I was talking off the top of my head :trust: DX11 development is finished, that's no secret, and neither is the compatibility; it's just that ATI adapted earlier and developed its GPU in that direction.
http://www.earthtimes.org/articles/...directxreg-11-graphics-processor,847393.shtml
http://www.anandtech.com/video/showdoc.aspx?i=3573

I'm sorry I don't have time to find an article from The Inquirer that addresses exactly this topic.
 