
Does Nvidia have a future?

At the risk of being publicly stoned by some forum members over the title, I'm opening this thread with a link to bit-tech.net, which has started a series of articles on the most influential and pivotal companies in the recent technology industry.

The article looks back over the last few years of nVidia's ups and downs, its business and development, and also gives an insight into new technologies such as GPU computing, Tegra, TWIMTBP, Ion...



Worth reading...
 
Dumb thread title - not yours, I mean - it makes it sound like nV is going under. :p Still, an OK article nonetheless.
 
Can someone explain to me whether DX11 is a Windows 7-only feature? I don't think it is, so I really don't see what kind of advantage it is for AMD to release a DX11 card at the same time Windows 7 comes out.
DX11 games don't exist yet; they'll arrive soon, but by then NVidia will have released a new GPU.

Furthermore, NVidia is 'at war' on several fronts... desktop GPUs, notebook GPUs, handheld chipsets; in number crunching (CUDA and OpenCL) it is the market leader, then there's developer support, and so on. In some of these areas it has no competition, so it is able to create a standard which everyone later wants to change, while refusing to adopt what NVidia has already built.

Take DX compute, for example. That will only be used for games. Nobody serious will write number-crunching applications on top of it because the cost is too high... e.g. Linux + OpenCL/CUDA + NV hardware is far cheaper than Windows + OpenCL/CUDA/DX compute + NV hardware. Imagine someone building a number-crunching cluster... how much money do they have to shell out for Windows licences? For that money they could buy 30% more nodes and run Linux on them.
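
A rough back-of-the-envelope sketch of that claim, with purely hypothetical prices (none are given in the thread): if a node costs $C$ and the OS licence adds roughly $0.3\,C$ per node, then a fixed budget $B$ buys $B/(1.3\,C)$ licensed nodes versus $B/C$ licence-free ones, i.e. about 30% more nodes for the same money once the licence cost is dropped.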

And if, for example, they put ARM cores into the new GT300 instead of x86, then with any gcc compiler you could build a program that runs entirely on the GPU without using the CPU at all.

The lack of an x86 licence isn't that big a drawback. A large amount of existing x86 code cannot be parallelised automatically by simply recompiling it to run on N processors, or by running an already compiled application across N processors; key parts have to be redone to exploit the advantages of a massively parallel architecture. In that case the x86 part of the program is just a 'conductor' that uses the services of the much more powerful GPU coprocessor. And they already have an ARM core in Tegra.
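
To make the 'conductor' picture concrete, here is a minimal CUDA sketch (a hypothetical example, not taken from the article): the host CPU only allocates buffers, stages the data and launches the kernel, while all of the actual arithmetic runs on the GPU.

```cuda
// Minimal sketch: CPU as "conductor", GPU as the number-crunching coprocessor.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: each thread scales one element of the array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));                               // CPU: allocate GPU memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // CPU: stage the data

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                     // CPU: dispatch, GPU: compute
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // CPU: collect the result

    printf("host[0] = %f\n", host[0]);                                 // expect 2.0
    cudaFree(dev);
    free(host);
    return 0;
}
```

The host side here is trivial bookkeeping; swap the x86 host for an ARM core (as in Tegra) and nothing in the GPU part changes, which is the point being made above.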
 
I agree with the rest. 'In theory' they can change something, but don't forget that Intel has enormous capital and can 'bury' even something that does the job better :)

As for your DX11 question, the advantage is that you'll have people who will immediately want to 'future-proof' themselves. If I were in their place and nvidia didn't manage to get out at least a paper launch, I'd raise the card prices by $50 (they're announced at $299, same as the 4870 was). You know yourself that the people who think 'the right way' (i.e. who are hardened veterans of this IT world) make up a very small percentage. You're aware of how many GTX 280 cards nvidia sold at $650? If they hadn't had that head start and AMD had released its cards before them, they would have felt an even bigger hit. This way they have at least bought time to invest in the 55 nm process and cut their costs.
 
Well, there you go :p When you have the capital, it's easy...
Exactly what yooyo mentioned: nvidia is 'fighting' in a lot of segments, but I think they've jumped in a bit too early. Better fewer segments where they dominate than a bunch of them where they merely have a presence. Then again, maybe it couldn't have been done any other way. Who am I to judge; they probably pay a pile of analysts and advisers :d
 
Well ***.. they had chipsets and then gave up on them because they can't get into that market alongside Intel and AMD.
In desktop GPUs their rivals are AMD and VIA,
in notebook GPUs - Intel, AMD,
in notebook chipsets - Intel and maybe AMD,
in GPGPU - nobody,
in the mobile segment - Qualcomm, TI, Samsung and maybe a few others.
 
^
Interesting that you didn't mention ARM - which isn't a rival but an ally :) That business policy of doing only IP, and non-x86 IP at that, is brilliant :)

Well, there you go :p When you have the capital, it's easy...
Exactly what yooyo mentioned: nvidia is 'fighting' in a lot of segments, but I think they've jumped in a bit too early.
mcrazy, times are changing; "Diana was right: world changing, music changing, even drugs is changing..." :d

Times have always been changing, and there have always been alternatives to the x86 + M$ mainstream (everything went a bit quiet during the absolute dominance of Wintel, from the mid-90s until around 2005...), but what matters is that the alternatives exist, even the failed ones ;) - it's just a shame that all those good solutions will arrive with a _5-year delay_ because of "economic" factors... :d
 
Thanks for asking - it does have a future... take my word for it :)
 
Although, you've taken down your nV avatar; something doesn't add up there...
 
Although, you've taken down your nV avatar; something doesn't add up there...

I have - this happy wombat was just too cute, I couldn't resist putting it up... I just took a look at the stock: nvidia has jumped and is about to cross the $8 billion mark in market cap. Share price is people's confidence in a company, so if the stock is climbing, that's a consequence of confidence... the shares of a company with no future don't climb.
 
Here's what Huang says to that :)



GPU performance will increase up to 570x in the next six years.

TG Daily is reporting that Nvidia CEO Jen-Hsun Huang made an astonishing prediction, claiming that GPU computing will dramatically increase over the next six years, a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe.

According to Huang, who made his revelation at the Hot Chips symposium in Stanford University, the advancement would open the door to advanced forms of augmented reality and the development of real-time universal language translation devices. Wait? A universal translator? Sounds like Huang is talking Star Trek!

Huang also said that such advancements in GPU computation would also boost a number of applications such as interactive ray tracing, CGI simulations, energy exploration, and other "real-world" applications.

Just so there's no confusion, this refers to GPU computing, not performance in games :)
 
Just so there's no confusion, this refers to GPU computing, not performance in games :)

How did he get to 570x? At the current pace the factor is roughly 2x per year.

Also, what does he mean when he predicts CPU performance will grow only 3x... that's six years, at least two new architectures and who knows how many cores.

Maybe someone more mathematically literate can explain these projections; I'm obviously not reading them the right way.
 
GPUs will have 20 teraflops by 2015
The arrival of Bill Dally at the head of R&D for NVIDIA is then a clear indication of the company’s objectives: evolving from a 3D specialist towards a specialist in massively parallel processing with 3D “simply” one of the uses for this area. This transition could allow NVIDIA to place itself at the heart of future architectures and no longer as a supplier of an optional extra. More than simply a desire to change things in their favour, this is in fact more of a recognition by NVIDIA of an inevitable development. Bill Dally has then been appointed to ensure that NVIDIA has the right technology over the next 5 to 10 years.

We didn’t learn anything specific on NVIDIA’s plans or forthcoming products in our interview. Bill Dally has only just joined the company and only has partial knowledge of the current products and strategies and was happy just to repeat the official company line. So there was nothing really new to learn there.

We were however able to discuss other matters. Bill Dally is reckoning on a processing power of 20 teraflops for GPUs by 2015, which corresponds to a doubling of processing power every two years. He also said that GPUs should also evolve to become more effective in the execution of less multi-threaded tasks, that’s to say exploiting parallelism at instruction level on top of data/thread level to which current GPUs are limited, in contrast to Intel’s forthcoming Larrabee.

When questioned on Intel’s competitive advantage, seeing as the CPU number 1 owns its own factories and can therefore potentially put new fab technologies into practice more quickly, Bill Dally said that this was a detail and that what was important was to develop the most effective architecture. Still on this subject and in respect of being able to use GlobalFoundries instead of TSMC, Bill Dally said that he didn’t think SOI technology was worth pursuing as it is too expensive given the fact that it only brings a small advantage. As far as anything else goes anything is possible of course, but both NVIDIA and AMD want to keep TSMC sweet for the time being as TSMC is currently their only partner. They therefore don’t have much to say at the moment on the subject of GlobalFoundries.

Another interesting point made by Bill Dally with regard to projects at NVIDIA concerned making C for CUDA available on other platforms. NVIDIA however does not envisage supporting Intel’s Ct language, Bill Dally affirming that C for CUDA is currently the reference in the domain. It would seem that NVIDIA’s strategy will be to open C for CUDA to other architectures so as to maintain this position. Controlling, at least in part, the software side of forthcoming architectures is of course likely to help NVIDIA to position itself better and this makes it more advantageous in the long term to open up C for CUDA rather than to limit it to its own products to protect its smaller current market.



Give us power tools
Performance scaling of single-thread processors stopped in 2002, Dally said, following a period when the industry derived a performance increase of 52 percent per year for more than 20 years. But throughput-optimized processors like graphics processing units (GPUs) are still improving by greater than 70 percent per year, he said.

"Now that we are no longer scaling single-thread processors, you have no alternative," Dally said.

Throughput processors have hundreds of cores today and will have thousands of cores by 2015, Dally said. By then, Nvidia will have GPUs implemented on 11 nm process technology that feature roughly 5,000 cores and 20 teraflops of performance, Dally predicted.

The requirements of EDA tools exceed the performance increases enabled by continued compliance with Moore's Law, Dally said, and won't be sustained by improvements in single-thread processors.



Q&A: Mike Acton
Do you think that old programming practices have caused people to fall into bad habits that make working on modern architectures harder?

It's interesting, because I think that probably the oldest programming methods are the most relevant today. It's the habits over the last five or eight years that are struggling, and it's interestingly the people that are more recently out of school that are going to have the most trouble, because the education system really hasn't caught up to how the real world is, how hardware is changing and how development is changing.

The kinds of things that they're teaching specifically about software as its own platform is teaching people to abstract things and make them more generic - treating software as a platform, whereas hardware is the real platform - but performance, and the low-level aspects of hardware, aren't part of the education system. People come in with a wrong-headed view on how to develop software. And that's the reason why Office 2007 locks up my machine for two minutes when I get an e-mail. :d

So you think universities should be putting more emphasis on parallel and heterogeneous processing?

I think we're finding that in the past couple of years universities have started to address parallel processing - MIT and Georgia Tech both have good programmes - so we're starting to see trends there on that. As far as low-level programming, yeah, I'd like to see that covered - you have a lot of people leaving school now who not only have never written any assembly but don't even understand how it works in general.

They use a high-level or compiled language, and it’s like a magic box to them. But it's something that as a professional programmer you should know - it should be part of the job description - and I think fundamentally what's missing is an understanding of hardware and how it works and how it fits into the programming ecosystem. So maybe what they should be blending is an electronic engineering degree along with a computer science course.

Do you think that getting used to this heterogeneous processing model is just a learning step, something that developers will just have to overcome?

Yeah, multi-core isn't tomorrow's tech - it's here now. You can kind of get away with it on other contemporary platforms, but next-gen, and the gen after that - there's no option. These are skills and lessons that you have to learn in order to survive in this industry. Another example is to take the generation jump between the SNES/Mega Drive and PlayStation/Saturn - there were a huge amount of developers that didn't survive that transition because they had to learn 3D, and they said 'well, 3D is so much harder than 2D'. Building models, dealing with a huge number of artists - this was much harder than just having sprites and just blitting them to the screen. People didn't survive, and they bitched, but you have to do it because the world changes.
 
How did he get to 570x? At the current pace the factor is roughly 2x per year.

Also, what does he mean when he predicts CPU performance will grow only 3x... that's six years, at least two new architectures and who knows how many cores.

Maybe someone more mathematically literate can explain these projections; I'm obviously not reading them the right way.

On the slide behind him it says 3x for the CPU, coming from a 1.2x speed-up per year over 6 years.
For the GPU, 570 comes from 50 x 1.5^6, which presumably means the number of processing units inside the GPU grows 50x over 6 years and each processing unit gets 1.5x more powerful every year. I could be wrong, though. A 50x increase in the number of shaders (or whatever the hardware processing unit inside a GPU is) sounds like science fiction - by that logic Nvidia expects the shader count to roughly double every year.
Or maybe they also count it as progress when you stick Quad SLI into the machine :D

Only now do I see in the previous post that they really do hope to increase the number of cores inside the GPU by several dozen times, and to be on an 11 nm process in 2015.
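
For reference, the arithmetic works out exactly as described above: for the CPU, $1.2^6 \approx 2.99 \approx 3\times$; for the GPU, $50 \times 1.5^6 \approx 50 \times 11.39 \approx 570\times$, which is where the 570 figure on the slide comes from.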
 
Great info! Thanks, colt!
 
The insurer refuses to pay out to NVIDIA for the settlement it reached with notebook manufacturers over complaints about faulty chipsets:

http://www.tgdaily.com/content/view/42472/118/

zipa, and NVIDIA coolly assumed the insurance company would cover, just like that, all those millions it paid out to the notebook manufacturers!

And it's going to get really messy if it turns out NVIDIA committed a 'commercial offence'.
 
An interview with Jensen, full of unbelievable answers!

http://www.chw.net/2009/10/chw-entrevista-a-jen-hsun-huang/2/#

Just a couple of them:

CHW: What will drive NVIDIA’s growth in the future? Hardware? Software? GeForce? Tegra? Tesla? Etc…


JHH: Well, NVIDIA is a software company, this will be what will drive our growth

CHW: How do you make money then?

JHH: By selling hardware

CHW: How do you plan to always be the best? How do you achieve this?

JHH: The best way to do it is to have no backup plan

CHW: Are Gamers the most profitable market for NVIDIA?

JHH: I would say probably not,

CHW: What do you think about the Lucid Hydra 200 Chip?

JHH: I don’t know, it sounds like terrible idea.


Looks like the man has eaten that whole can of whoopass himself, judging by the things he's saying...

In short: "We are a software company that lives off selling hardware and has no plan B, gamers are not our primary focus, and yes, we hate Lucid Hydra" - JHH
 
The following text is the transcription of a 30 min meeting with JHH.

Nedjo, could it be you've fallen for a bad translation again? :d

The whole text reads oddly; in places it's genuinely hard to understand.

On the other hand, if the gist is right, I don't see what's so strange about his answers. I mean, you nicely cut out the bits that suit you and then added your own commentary ;)

What's strange about them calling themselves a software company? Cisco describes itself the same way.
And nVidia doesn't manufacture any hardware anyway - it designs GPUs, writes drivers, applications, CUDA...
 
But at least Nedjo gave us the 'in short' translation :d
 
JPR: GPU shipments grew 21.2% last quarter

This is the highest quarter-to-quarter growth in nine years!

Intel and AMD look like the winners there. Compared to the previous quarter, Intel enjoyed 25.2% growth in shipments, AMD saw 30.2%, and Nvidia just 3.3%. Compared to the same quarter a year before, JPR registered growth of 14.6% for Intel, 3.1% for AMD, and -4% for Nvidia (yep, that's a minus sign).

Q3 '09 sees Nvidia's market share finally succumb in alarming ways. Nvidia's market share dropped a full 4.3% over just one quarter, from 29.2% to 24.9%. Intel and AMD were both winners, increasing their market share to 52.7% and 19.8% respectively.

Fermi is an amazing piece of technology—period. The Nvidia haters (they are legion I’m told) have been quick to point out that it is overloaded with silicon not needed for gaming, and is big, and won’t be available in time for this year’s holiday madness. And actually much of that is true—but wait—there’s more.

Back in the days I built hot rods we would buy the biggest engine car we could afford—and then strip everything off it we possibly could to focus its horsepower on the issue at hand—being first off the line and to the finish line. We didn’t start with a tiny engine and try to build it up. So off came the air conditioning, out went the power steering, the bumpers were pulled, the back seat thrown out, and we’d even pull the gas tank and replace it with a can—this car only had to go 1,350 feet.

Fermi is the same kind of thing. If there’s stuff in it that’s not needed for gaming don’t you think Nvidia is smart enough to know how to make a stripped down derivative? But before you can start stripping down, you gotta have something big and powerful to work with.

Getting out of the game business—I don’t think so. Getting out of the lime light since they won’t have a new product for the holiday—you betcha. Don’t read too much into those tea leaves.

Sources: techreport, vr-zone, jonpeddie.

The expected results. Sooner or later sales had to pick up. The only problem now evident is that nVidia is reaping the fruits of its own carelessness (or inability) over the previous three quarters when it came to building a decent yet profitable GPU. Now you can see how badly a GT200 derivative is missed for fixing sales. Looking at the quarter we're in now, it can only get worse for them. I sincerely hope Fermi is worth this much risk. Otherwise, a thorough restructuring awaits them, along with a reorientation towards other lines of business (here I primarily mean GPGPU and the low-power GPU segment).
 
Their end began when AMD bought ATI, i.e. when they were left on the market alone like an orphan... without the backing of the big players things don't look good for them, because they have no licence to build a CPU, and they've been squeezed out of chipsets as well. I mean, they won't collapse - they're hanging on through GPU computing and designing logic for Apple - the only question is for how long... I get the impression GPGPU is being pushed so hard precisely because, as I said, they have no licence to make CPUs... it's a pity, I've owned more NV cards in my life than anything else, but I have the feeling they're slowly sinking with no way out...
 
If the GT200 was too big/unprofitable because it leaned too far towards GPGPU, that will be even more pronounced with Fermi.

The story about cheap derivatives stands on shaky legs too.
Take AMD's HD 5000 series: the RV870 (HD 5800) supports double-precision floating point, while the RV840 (HD 5700) does not. The reason: they judged that the feature isn't needed in that class, i.e. it brings no benefit to gaming performance while making the chip more complicated.

Unless NV intends to build a derivative without ECC support in the memory controller and without DPFP - but then they sacrifice GPGPU :). Somehow I have a feeling that's exactly what will happen.
 
In years when companies race to flatter 'gamers' ahead of every new product launch and sell the line that they 'care' about them (and not about their money, right), what do we actually have?
An old 8800GT runs this season's 'hit' OFP: Dragon Rising at 1080p with 2xAA and 4xAF, while the visually most advanced game - pretty much the only one that demands stronger hardware - is two years old.
That's what happens when you get in bed with Microsoft: promote their OS and the new DirectX so you can sell your hardware. And people buy into it, smirk at what the 'competition' is doing (not the console competition, but the one that also makes PC hardware, lol) and claim that said competition is - going under. Because it doesn't have a DX11 card on the market, OMGZOMG.
Instead of pushing something else, DX11 ends up being 'the main' and most important must-have feature. M$ happy, card buyers happy... AND THE GAMES?? They've been lagging behind for a couple of years now. Graphically. And gameplay-wise they're stagnating.

nV is the only one in this mix that doesn't make CPUs and it HAS to push GPGPU, as well as low-power GPUs for portable devices, smartphones and the like. So there's nothing strange there. That some people now portray them as 'not caring' about gamers and call them out over it is hilarious... while at the same time sucking up to M$ and advertising its OS and DX - the company most responsible for the (sorry) state and stagnation of video games (on both PC and consoles).

As far as I'm concerned, buying a new graphics card for games has been pointless for a while now... I thought the move to 1080p might change that, but nope :D Besides, the vast majority of games drifting across all these platforms is crap slapped together in a hurry to earn as much as possible as quickly as possible.
On the other hand - I'm waiting for GPGPU. I WANT my graphics card to take over ALL the work I currently use the CPU for. If they need that much time to develop such a GPU, so be it. It will be worth it. I'd sooner pay even those 600 euros some people are predicting, for a GPU that runs V-Ray RT 30 times faster (and more) than a lousy CPU costing over 1000 euros.
That's why I personally think something like that can only come from a company that doesn't make CPUs. We know who started pushing it first, after all. Why would Intel and AMD saw off the branch they're sitting on and suddenly start pushing something like that, in years when the CPU business brings in the bulk of their earnings. Yes, they'll brag that their GPU supports 'this and that', but I don't believe anything will come of it, just as nothing has so far.
And of course the CPU players will call them out for all sorts of reasons, because these guys are taking work away from them on their own turf - supercomputers, laboratories, demanding users, render farms...

So I have nothing against a GPU company pushing GPGPU 'at the expense of PC gaming', because PC gaming has long been the last thing on M$'s mind... if a 600-euro card is going to idle in multiplatform games anyway, let it at least mop the floor with the CPU in the rest of the work I do. (Provided they don't screw things up with a restriction like 'sorry mate, for V-Ray RT you need a Tesla, this one is just for games', lol...)

If nV goes under because of that - because it still doesn't have DX11 and its chip is more complicated than the competitor's due to pushing GPGPU - so be it... it wouldn't be the first or the last time a company that did the right thing went under :d
 
Well, what matters is a good balance between the two kinds of functionality relative to what the market is asking for in the short term :).
 
Sure, it matters, but because of what... if we go down the DirectX line, I couldn't care less whether I'm getting 60 or 120 FPS at 1080p in some multiplatform title.
In that respect, that new GPU will most certainly be 'good enough' for me, since even this 8800GT is still holding up just fine :d ... What remains to be seen is whether GPGPU will keep advancing and speed up the things currently held back by the CPU and its snail's-pace progress at another $1000 a pop...
 
I'd more or less agree with nex on the GPGPU part, but I think DX11, if developers adopt it, really has the potential to bring some freshness to PC titles. I like that the tessellation units on the GPUs will be used for displacement, enriching models with extra geometric detail; the water in the DiRT demo looks nice, and the compute shader will probably take care of implementing universal physics across all DX11 platforms...

The remaining question is how much implementing DX11 features costs and whether it will pay off for developers to offer them as an extra to PC users in future multiplatform titles.

Then again, maybe a gem turns up among the PC-only titles :d
 