When the Nvidia GeForce RTX 4090 was announced with an eye-watering $1,600 price tag, memes spread like wildfire. While $1,600 is a bit too much for most gamers to spend on a single component (most PC build budgets I see are lower than that for the entire PC), I couldn't help but be intrigued by the potential performance improvements for my work—you know, the 3D and AI-accelerated tasks I spend most of my day doing as part of managing the EposVox YouTube channel, rather than gaming.
Spoiler alert: The GeForce RTX 4090's content creation performance is magical. In quite a few cases, the typically dubious "2X performance increase" claim is actually true. But not everywhere.
Let’s dig in.
Our test setup
Most of my benchmarking was performed on this test bench:
- Intel Core i9-12900K CPU
- 32GB Corsair Vengeance DDR5 5200MT/s RAM
- Asus ROG Strix Z690-E Gaming WiFi motherboard
- EVGA G3 850W PSU
- Source files stored on a PCIe Gen 4 NVMe SSD
My objective was to see how much of an upgrade the RTX 4090 would be over the previous-generation GeForce RTX 3090, as well as the RTX Titan (the card I was primarily working on before). The RTX 3090 saw only minimal improvements over the RTX Titan for my use cases, so I wasn't sure whether the 4090 would really be a huge leap. For some of the more hardcore testing I'll mention later, this test bench was used:
- AMD Threadripper Pro 3975WX CPU
- 256GB Kingston ECC DDR4 RAM
- Asus WRX80 SAGE motherboard
- BeQuiet 1400W PSU
- Source files stored on a PCIe Gen 4 NVMe SSD
Each test ran every GPU in the same configuration, so as not to mix results.
Video production
My day job is, of course, creating YouTube content, so one of the first things I wanted to test was the benefit I'd see when creating video content. Using PugetBench from the workstation builders at Puget Systems, Adobe Premiere Pro sees very minimal performance improvement with the RTX 4090 (as is to be expected at this point).
Blackmagic DaVinci Resolve, however, saw significant performance improvements across the board over both the RTX 3090 and the RTX Titan. This makes sense, as Resolve is far more optimized for GPU workflows than Premiere Pro. Renders were much faster thanks both to the higher 3D compute on effects and to the faster encoding hardware onboard—and overall playback and workflow felt much snappier and more responsive.

I've been editing with the GeForce RTX 4090 for a few weeks now, and the experience has been great—though I had to revert to the public release of Resolve, so I haven't been able to export using the AV1 encoder for most of my videos.

I also wanted to test whether the AI hardware improvements would benefit Resolve's Magic Mask tool for rotoscoping, or its face tracker for the Face Refinement plugin. Admittedly, I was hoping to see bigger gains from the RTX 4090 here, but there is an improvement, which saves me time and is a win. These tasks are tedious and slow, so any minutes I can shave off make my life easier. Perhaps in time more optimization will be done specifically for the new architecture changes in Lovelace (the RTX 40-series' underlying GPU architecture codename).

The performance in my original Resolve benchmarks impressed me enough that I decided to build a second-tier test using my Threadripper Pro workstation: rendering and exporting an 8K video with 8K RAW source footage, lots of effects, Super Scale (Resolve's internal "smart" upscaler) applied to 4K footage, and so on. This project is no joke—the normal high-tier gaming cards simply errored out because their lower VRAM quantities couldn't handle it, which bumps the RTX 3060, 2080, and 3080 out of the running. But putting the 24GB VRAM monsters to the test, the RTX 4090 exported the project a full eight minutes faster than the RTX 3090. Eight minutes. That kind of time saving is game changing for single-person workflows like mine.

If you're a high-res or effects-heavy video editor, the RTX 4090 is already going to save you hours of waiting and slow working right out of the gate—and we haven't even talked about encoding speeds yet.
Video encoding
For simply transcoding video in H.264 and H.265, the GeForce RTX 4090 also just runs laps around previous Nvidia GPUs. H.265 is the one area where AMD outperforms Nvidia in encoder speed (though not necessarily quality); ever since the Radeon 5000 GPUs, AMD's HEVC encoder has been blazing fast.

The new Ada Lovelace architecture also comes with new dual encoder chips that individually already run a fair bit faster for H.264 and H.265 encoding than Ampere and Turing, but they also encode AV1—the new, open-source video codec from the Alliance for Open Media.

AV1 is the future of web-streamed video, with most major companies involved in media streaming also being members of the consortium. The goal is to create a highly efficient (as in, higher quality per bit) video codec that can meet the needs of the modern high-resolution, high-frame-rate, and HDR streaming world, while avoiding the high licensing and patent costs associated with the H.265 (HEVC) and H.266 codecs. Intel was first to market with hardware AV1 encoders in its Arc GPUs, as I covered for PCWorld—and now Nvidia brings it to its GPUs.
I can't get completely accurate quality comparisons between Intel's and Nvidia's AV1 encoders yet due to limited software support. From the basic tests I could do, Nvidia's AV1 encodes are on par with Intel's—but I have since found that the encoder implementations, even in the software where I can use them, could each use some fine-tuning to best represent both sides.
Performance-wise, AV1 encodes about as fast as H.265/HEVC on the RTX 4090. Which is fine. But the new dual encoder chips allow both H.265 and AV1 to be used to encode 8K60 video, or simply to encode 4K60 video faster. They do this by splitting each video frame into horizontal halves, encoding the halves on separate chips, and then stitching them back together before finalizing the stream. This sounds like how Intel's Hyper Encode was supposed to work—Hyper Encode instead splits GOPs (Groups of Pictures, or frames) between the iGPU and dGPU with Arc—but in all of my tests, I only found Hyper Encode to slow the process down rather than speed it up. (Plus, it didn't work with AV1.)
Streaming
As a result of the aforementioned improvements in encoder speed, streaming and recording your screen, camera, or gameplay is a far better experience. This comes with an update to the NVENC encoder SDK within OBS Studio, which now presents users with seven presets (akin to x264's "CPU usage presets") scaling from P1, the fastest/lowest quality, to P7, the slowest/highest quality. In my testing for this video, P6 and P7 produced basically the exact same result on RTX 2000, 3000, and 4000 GPUs, and competed with x264 VerySlow in quality.
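If you want to poke at those presets and the new AV1 hardware encoder outside of OBS, here's a minimal sketch—my own illustration, not part of the original testing—that shells out to FFmpeg and times a hardware AV1 encode at each preset. It assumes an FFmpeg build with the av1_nvenc encoder (FFmpeg 5.1 or newer with a recent NVENC SDK) and a placeholder clip called input.mp4; the bitrate is also just an example value.

```python
# Rough sketch: time an NVENC AV1 transcode at each preset (P1 = fastest, P7 = slowest/best quality).
# Assumes an FFmpeg build with av1_nvenc support and a local test clip "input.mp4" --
# both are assumptions for illustration, not values from the article's benchmarks.
import subprocess
import time

INPUT = "input.mp4"   # placeholder test clip
BITRATE = "8M"        # placeholder target bitrate

for preset in ["p1", "p2", "p3", "p4", "p5", "p6", "p7"]:
    cmd = [
        "ffmpeg", "-y", "-hide_banner", "-loglevel", "error",
        "-i", INPUT,
        "-c:v", "av1_nvenc",   # hardware AV1 encoder on RTX 40-series
        "-preset", preset,     # NVENC preset, P1 (fastest) .. P7 (highest quality)
        "-b:v", BITRATE,
        "-c:a", "copy",
        f"out_{preset}.mkv",
    ]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"{preset}: {time.time() - start:.1f}s")
```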
While game streaming, I saw mostly the same recording performance as other GPUs in Spider-Man Remastered (though other games will see more benefit) with H.264, but encoding with AV1… had negligible impact on game performance at all. It was virtually transparent. You wouldn't even know you were recording, even on the highest-quality preset. I even had enough headroom to set OBS to an 8K canvas, upscale my 1440p game capture to 8K within OBS, and record using the dual encoder chips, and still not see a significant impact.

Unfortunately, while Nvidia's ShadowPlay feature does get 8K60 support on Lovelace via the dual encoders, only HEVC is supported at present. Hopefully AV1 will be implemented—and supported for all resolutions, as HEVC currently only works for 8K or HDR—soon.
I also found that the GeForce RTX 4090 is now fast enough to do completely lossless 4:4:4 HEVC recording at 4K 60FPS—something prior generations simply could not do. 4:4:4 chroma (that is, no chroma subsampling) is important for maintaining text clarity and for keeping the image intact when zooming in on small elements like I do for videos, and doing it at 4K has kind of been a "white whale" of mine, because the throughput on RTX 2000/3000 hasn't been enough, or the OBS implementation of 4:4:4 isn't optimized enough. Unfortunately, 4:4:4 isn't possible in AV1 on these cards at all.
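For the curious, this is roughly what a lossless 4:4:4 HEVC encode looks like through FFmpeg's NVENC path—a sketch for illustration, not my OBS recording setup. It assumes a build with hevc_nvenc and a placeholder 4K60 source file.

```python
# Rough sketch: lossless 4:4:4 HEVC encode via NVENC, approximating the kind of
# screen-capture settings discussed above. Assumes an FFmpeg build with hevc_nvenc
# and a 4K60 source named "capture_4k60.mov" -- both are assumptions.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "capture_4k60.mov",   # placeholder 4K60 source
    "-c:v", "hevc_nvenc",
    "-pix_fmt", "yuv444p",      # 4:4:4 chroma (no subsampling) keeps fine text crisp
    "-profile:v", "rext",       # range-extensions profile needed for 4:4:4 HEVC
    "-tune", "lossless",        # mathematically lossless output
    "out_444_lossless.mkv",
], check=True)
```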
Photo editing
Photo editing sees virtually zero improvement on the GeForce RTX 4090. There's a slight score increase in the Adobe Photoshop PugetBench tests versus previous generations, but nothing worth buying a new card over.

Same goes for Lightroom Classic. Shame.

But if you're an Affinity Photo user, the RTX 4090 far outperforms other GPUs—though I'm not sure whether to read that as Affinity simply being unusually well optimized in this case.
A.I.
AI is all the rage these days, and AI upscalers are in high demand right now. Theoretically, the GeForce RTX 4090's improved AI hardware should benefit these workflows—and we mostly see this ring true. The RTX 4090 tops the charts for fastest upscaling in Topaz Labs Video Enhance AI and Gigapixel, as well as ON1 Resize AI 2022.

But Topaz's new Photo AI app sees weirdly low performance on all Nvidia cards. I've been told this may be a bug, but a fix has yet to be distributed.

Using FlowFrames to AI-interpolate 60FPS footage to 120FPS for slow-motion use, the RTX 4090 sees a 20 percent speed-up compared with the RTX 3090. This is good as it is, but I've been told by users in the FlowFrames Discord server that this could theoretically scale further as optimizations for Lovelace are developed.

What about generating AI art? I tested N00mkrad's Stable Diffusion GUI and found that the GeForce RTX 4090 blew away all previous GPUs in both half- and full-precision generation—and once again, I've been told the results "should be" even better. Exciting times.
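If "half versus full precision" sounds abstract, here's a minimal sketch using the Hugging Face diffusers library (not the N00mkrad GUI I actually benchmarked) that runs the same prompt in float16 and float32 so you can compare generation times yourself; the model ID and prompt are placeholders of my choosing.

```python
# Rough sketch: compare half-precision (float16) and full-precision (float32)
# Stable Diffusion generation times with the Hugging Face diffusers library.
# This is not the GUI used in the article; model and prompt are placeholders.
import time
import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"    # placeholder model
PROMPT = "a studio photo of a vintage camera"  # placeholder prompt

for dtype in (torch.float16, torch.float32):
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=dtype).to("cuda")
    start = time.time()
    image = pipe(PROMPT, num_inference_steps=30).images[0]
    print(f"{dtype}: {time.time() - start:.1f}s")
    image.save(f"out_{str(dtype).split('.')[-1]}.png")
```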

3D rendering
Alright, the bold "2X faster" claims are here. I wanted to test 3D workflows on my Threadripper Pro rig, since I've been getting more and more into these tools in 2022.

Testing Blender, both the Monster and Classroom benchmark scenes have the RTX 4090 rendering twice as fast as the RTX 3090, with the Junkshop scene rendering just shy of 2X faster.

This translates not only to faster final renders—which at scale is absolutely massive—but also to a much smoother creative process, as all of the actual preview/viewport work is more fluid and responsive, and you can more easily preview the final results without waiting forever.
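If you want to run a similar comparison on your own scenes, a small script like this one—my own sketch, not the official Blender benchmark—can be passed to Blender on the command line (blender -b scene.blend -P render_gpu.py) to time a Cycles render on the GPU. The OPTIX backend choice and file paths are assumptions for illustration.

```python
# render_gpu.py -- rough sketch for timing a Cycles GPU render from the command line.
# Run with:  blender -b scene.blend -P render_gpu.py
# The OPTIX backend and output path are illustrative assumptions, not the official
# Blender benchmark configuration.
import time
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # use the RTX cards' ray-tracing hardware
prefs.get_devices()                   # refresh the detected device list
for device in prefs.devices:
    device.use = device.type == "OPTIX"

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"
scene.render.filepath = "//render_out"

start = time.time()
bpy.ops.render.render(write_still=True)
print(f"Render finished in {time.time() - start:.1f}s")
```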

Benchmarking Octane—a renderer used by 3D artists and VFX creators in Cinema 4D, Blender, and Unity—again has the RTX 4090 running twice as fast as the RTX 3090.

…and again, the same goes for V-Ray in both CUDA and RTX workflows.
Bottom line: The GeForce RTX 4090 offers outstanding value to content creators
That's where the value is. The Titan RTX was $2,500, and it was already phenomenal to get that performance for $1,500 with the RTX 3090. Now, for $100 more, the GeForce RTX 4090 runs laps around prior GPUs in ways that have truly game-changing impacts on workflows for creators of all kinds.
This might explain why the formerly-known-as-Quadro line of cards has gotten far less emphasis over the past few years, too. Why buy a $5,000+ graphics card when you can get the same performance (or more—Quadros were never super fast, they just had lots of VRAM) for $1,600?
Obviously, the pricing of the $1,200 RTX 4080 and the recently un-launched $899 RTX 4080 12GB can still be concerning until we see independent testing numbers, but the GeForce RTX 4090 might just be the first time marketing has boasted of "2x faster" performance on a product and I feel like I've actually received that promise. Especially for results within my niche work interests rather than mainstream tools or gaming? That's awesome.
Pure gamers probably shouldn't spend $1,600 on a graphics card unless feeding a high-refresh-rate 4K monitor with no compromises is the goal. But if you're interested in getting real, nitty-gritty content creation work done fast, the GeForce RTX 4090 can't be beat—and it'll make you grin from ear to ear during any late-night Call of Duty sessions you hop into, as well.