Yesterday a sample of the Geforce 9400 GT arrived at our office. Our benchmark analysis of the Zotac card reveals what the low-cost solution is capable of - and where its limits lie.
Geforce 9400 GT reviewed
The Geforce 9400 GT is supposed to replace the 8500 GT and the 8400 GS. Given its position in the entry-level segment, the card is likely to cost less than €50 (about $73) - but you will have to accept some restrictions.
Here you can see the technical specifications:
Zotac Geforce 9400 GT Zone Edition: 2.5 ns VRAM (picture: PCGH)
Zotac Geforce 9400 GT Zone Edition: silent thanks to the passive cooler (picture: PCGH)
The theoretical specifications reveal that the card won't be able to compete with the 9500 GT. The memory interface does stay at 128 bits, but in contrast to the more expensive model, only DDR2 memory running at 400 MHz is used. As a result, memory bandwidth is cut from 25.6 to 12.8 GiByte per second. The numbers of shader ALUs and texture units (TMUs) have also been halved, while their clock speed stays at 550 MHz. All this suggests that the Geforce 9500 GT will be about twice as fast as the new Geforce 9400 GT.
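The bandwidth figures above follow directly from bus width and memory clock. As a rough sketch (the helper function and its name are our own, not vendor data), peak DDR bandwidth is bus width in bytes times the memory clock times two transfers per cycle:

```python
def ddr_bandwidth_gbs(bus_bits: int, clock_mhz: float) -> float:
    """Peak DDR memory bandwidth in GB/s: bus width (bytes) x clock x 2 transfers/cycle."""
    return bus_bits / 8 * clock_mhz * 2 / 1000

# Geforce 9400 GT: 128-bit bus, 400 MHz DDR2
print(ddr_bandwidth_gbs(128, 400))  # 12.8
# Geforce 9500 GT: 128-bit bus, 800 MHz GDDR3
print(ddr_bandwidth_gbs(128, 800))  # 25.6
```

Doubling the memory clock at the same bus width is exactly what gives the 9500 GT its twofold bandwidth advantage.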
Our test system:
Test system and configuration
Geforce 9400 GT: GPU-Z delivers some wrong details, but the card carries a G96b built in a 55 nm process. (picture: PCGH)
CPU: Intel Core 2 Duo E8500 @ 3,600 MHz (400x9)
Board: Asus P5N-D (Nforce 750i SLI chipset)
RAM: 4 x 1,024 MiByte DDR2-800 (5-5-5-15)
OS: Windows Vista 64 Bit with SP1
• Forceware 177.79 (HQ)
• Catalyst 8.7 (AI def.)
• Geforce 9400 GT, 512 MiB DDR2, 550/1,350/400 MHz
• Geforce 9500 GT, 512 MiB GDDR3, 550/1,375/800 MHz
• Geforce 8600 GT, 256 MiB GDDR3, 600/1,188/700 MHz
• Radeon HD 3850, 256 MiB GDDR3, 668/828 MHz
• Radeon HD 3650, 512 MiB GDDR3, 750/800 MHz
• Radeon HD 3450, 256 MiB GDDR2, 594/495 MHz
As usual, we use the High Quality setting of the Nvidia driver for all Geforce cards. For AMD's Radeon models we stick to the default A.I. setting, since otherwise bug fixes and application-specific optimizations would be disabled.