DF Weekly: Can the new RTX 5070 really match the performance of the RTX 4090?
This week’s Digital Foundry Direct comes straight from Las Vegas, where Oliver Mackenzie and Alex Battaglia share their impressions of the technologies on display on the show floor – starting with a look at the Nvidia RTX 50-series keynote. It’s one statement from that presentation that I’m going to explore in this week’s blog: does the upcoming RTX 5070 really offer the “performance” of the RTX 4090? Judged by established metrics, the answer is of course no, but the reasoning behind the claim is straightforward and points towards the next era: large hardware-driven jumps in frame rate are diminishing and, like it or not, the future leans more heavily on software, with machine learning playing the leading role.
To address Nvidia’s claim head-on: the notion of performance parity with the RTX 5070 rests entirely on DLSS 4’s multi-frame generation, with the new $549 GPU presumably set against an RTX 4090 running at the same resolution and settings but limited to single frame generation. In every quantifiable way, the statement does not stand up to scrutiny. Even running on a more modern architecture and process node, the RTX 5070’s 6,144 CUDA cores cannot match the RTX 4090’s 16,384, and there’s a similar discrepancy in RT and Tensor cores. The RTX 4090’s 384-bit memory interface gives way to a 192-bit bus on the RTX 5070, and while the newer card has faster memory, it packs just 12GB against the older card’s 24GB.
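To put the memory configuration gap into concrete numbers, here’s a quick back-of-the-envelope bandwidth calculation – a minimal sketch, assuming the commonly quoted per-pin data rates of 28Gbps GDDR7 for the RTX 5070 and 21Gbps GDDR6X for the RTX 4090:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
# The data rates below are assumptions based on publicly quoted figures.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gbps

rtx_5070 = peak_bandwidth_gb_s(192, 28.0)   # ~672 GB/s
rtx_4090 = peak_bandwidth_gb_s(384, 21.0)   # ~1008 GB/s

print(f"RTX 5070: {rtx_5070:.0f} GB/s")
print(f"RTX 4090: {rtx_4090:.0f} GB/s")
print(f"RTX 5070 offers roughly {rtx_5070 / rtx_4090:.0%} of the RTX 4090's bandwidth")
```

Even with the faster memory standard, the narrower bus leaves the 5070 with roughly two thirds of the 4090’s raw bandwidth – before you even consider the halved capacity.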
Simply put, without DLSS 4, the RTX 5070 won’t match the RTX 4090 – unless we’re talking about relatively undemanding games running at the limit of your display’s refresh rate (v-synced or capped). On the surface, Nvidia’s claim looks laughable, but hands-on reports from CES, like this one from PCGamesN, are complimentary, describing a Marvel Rivals demo in which the 5070 with multi-frame generation sometimes delivered much higher frame rates than the 4090, though parity was apparently more common. My guess is that Marvel Rivals is CPU-limited in this demo – the scenario where frame generation produces the biggest frame-rate multipliers – but even so, DLSS 4 leaves a positive impression and the frame-rate claim holds up, albeit with a big AI asterisk attached.
- 0:00:00 Introduction
- 0:01:47 Nvidia CES demos – RTX Mega Geometry
- 0:14:25 RTX Neural Materials
- 0:21:24 RTX Neural Faces and RTX Hair
- 0:31:37 ReSTIR path tracing + Mega Geometry
- 0:36:49 Black State with DLSS 4
- 0:42:47 Alan Wake 2 with DLSS 4
- 0:46:01 Reflex 2 in The Finals
- 0:53:22 AMD at CES: AI noise reduction demo, Lenovo Legion Go handhelds
- 1:03:51 Razer at CES: laptop cooling pad, new Razer Blade
- 1:11:30 Asus and Intel at CES
- 1:17:29 CES displays: Mini-LED, Micro-LED, OLED + monitor sins!
- 1:30:07 Supporter Question 1: Will Switch 2 support DLSS 4?
- 1:32:22 Supporter Question 2: Have you seen the Switch 2 mockups at CES?
- 1:33:56 Supporter Question 3: Can you test DLSS against a true ultra-high resolution image?
- 1:37:52 Supporter Question 4: Why would a developer use Nvidia’s rendering methods instead of their UE5 equivalents?
- 1:40:38 Supporter Question 5: Can multi-frame generation help solve stutter in games?
- 1:42:05 Supporter Question 6: Will multi-frame generation make VRR obsolete?
- 1:44:27 Supporter Question 7: Does Sony regret sticking with AMD in its console business?
- 1:49:49 Supporter Question 8: What do you think of FF7 Rebirth’s PC specs?
- 1:52:37 Supporter Question 9: What’s the craziest thing you saw at CES?
I’m willing to bet that Nvidia made the 5070/4090 comparison knowing full well that the claim would come under scrutiny. The firm is clearly confident that the testing to come will broadly bear out its claim, even if the less flattering aspects of the 5070 experience are highlighted. And at a basic SEO level, the more 5070 vs 4090 comparisons there are, the more algorithmic momentum the original claim receives.
So, having spent some time with DLSS 4, I can predict what these comparisons will look like. First of all, in games that genuinely use more than 12GB of VRAM, the RTX 5070’s limitations will result in either worse image quality or lower frame rates – Indiana Jones and the Great Circle would be an interesting example. Other games may well achieve frame-rate parity with the RTX 4090, but input lag will be higher and there may be more frame generation artifacts. The higher the base frame rate feeding frame generation, the less noticeable the added input lag, while the resulting output frame rate may be so high that frame generation artifacts become harder to spot – but the RTX 4090 should still look better and play better.
At best, testing Marvel Rivals might be fun, but it’s the “full RT” path-traced games like Alan Wake 2 and Cyberpunk 2077 that will really put the 5070 and 4090 head to head – and ultimately, this comparison is the key one. While path tracing is viable even on 4070-class hardware without frame generation, the full bells-and-whistles experience has been perceived as 4090 territory, and the promise is that this level of “performance” is now available in a $549 product.
Increased latency is the key frame generation limitation that DLSS 4 does not fully address. The principle is simple: a standard frame is rendered, then the next one is buffered (which introduces a delay). Frame generation then computes one, two or three interpolated frames (selectable in DLSS 4) that are displayed between the standard rendered frames, and Nvidia mandates its Reflex latency-reduction technology to help mitigate the added delay. DLSS multi-frame generation uses the same system, with one difference – the additional interpolated frames are produced very, very quickly. In my tests, adding two extra generated frames increased latency by only around 6.4ms compared to the existing frame generation technology.
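To make that pipeline concrete, here’s a simplified model – an illustrative sketch rather than Nvidia’s actual implementation. The assumption is that the dominant latency cost is holding back one rendered frame so interpolated frames can be slotted in between it and the previous one, while the interpolation itself is comparatively cheap:

```python
# Simplified frame generation model: buffering the "next" rendered frame adds roughly
# one render interval of latency; each interpolated frame multiplies the output frame rate.
# Illustrative only - not Nvidia's actual pipeline.

def frame_gen_estimate(base_fps: float, interpolated_per_pair: int):
    render_interval_ms = 1000.0 / base_fps
    output_fps = base_fps * (interpolated_per_pair + 1)
    buffering_latency_ms = render_interval_ms  # one rendered frame is held back before display
    return output_fps, buffering_latency_ms

for n in (1, 2, 3):  # DLSS 4 lets you choose one, two or three interpolated frames
    fps, lat = frame_gen_estimate(60.0, n)
    print(f"{n} interpolated frame(s): ~{fps:.0f}fps output, ~{lat:.1f}ms of buffering delay")
```

In this simplified view, the buffering cost doesn’t grow with the number of interpolated frames, which is consistent with the small measured delta above: the extra frames themselves are generated very quickly.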
At this point, I should pause and correct some misunderstandings I noticed in the YouTube comments on the DLSS 4 video we released last week, embedded below. Latency measurements were being conflated with frame times, so Cyberpunk 2077 showing 50ms of latency was read as the game running at 20fps. The truth is that every game has a different level of latency, even when targeting the same level of performance – as Tom Morgan demonstrated during his PS4-era testing. Tom’s measurements showed that several games locked to 60fps delivered very different input lag: Call of Duty: Infinite Warfare had a 47.5ms (!) latency advantage over Doom 2016 and a 37.3ms advantage over esports darling Overwatch. Latency matters, but in many scenarios the difference between games is larger than the latency added by frame generation. Frame rate is clearly noticeable; increased input lag much less so. And when did anyone complain about lag in Doom or Overwatch?
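A quick worked example – with illustrative numbers, not measurements – shows why the two metrics shouldn’t be conflated. Frame time is simply the reciprocal of frame rate, while end-to-end input latency spans several frames’ worth of pipeline (input sampling, simulation, render queue, display processing):

```python
# Frame time vs input latency: a 50ms latency reading does not imply 20fps.
# Illustrative numbers only.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(f"Frame time at 60fps: {frame_time_ms(60):.1f}ms")   # 16.7ms
print(f"Frame time at 20fps: {frame_time_ms(20):.1f}ms")   # 50.0ms

# A game locked to 60fps with 50ms end-to-end latency simply has ~3 frames of pipeline:
pipeline_frames = 50.0 / frame_time_ms(60)
print(f"50ms of latency at 60fps is ~{pipeline_frames:.1f} frames of pipeline")
```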
Next, I want to address the word “performance”. When DLSS 3 first arrived, we talked about the performance multipliers that come with frame generation, and Nvidia continues to use the same framing. However, feedback from the community got me thinking about whether “performance” is really the most appropriate term. Throughout the 3D era, increases in frame rate have been accompanied by decreases in input lag – an improvement in performance across all areas. Is frame generation really a performance improvement if latency goes up?
It’s a good point, and I then thought about it further: if 2x frame generation doesn’t deliver 2x the frame rate, aren’t we actually getting fewer standard rendered frames? Since the RTX 4090 review, I’ve taken that feedback on board and now describe the uplift from frame generation as a frame-rate increase rather than a performance increase. I make this distinction because performance is not just frame rate, even if, strictly speaking, the output is still a measurement of what the GPU delivers, just on different terms. So-called “fake frames”? They’re only a problem when the artifacts are obvious and noticeable. Like latency, this remains an issue, but I expect both to be addressed over time.
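To spell out the “fewer rendered frames” point with hypothetical numbers (none of these are measurements): if 2x frame generation delivers less than double the native frame rate, half the displayed frames are interpolated, so the count of traditionally rendered frames has actually gone down:

```python
# Hypothetical numbers: 2x frame generation that falls short of a full 2x output
# multiplier means fewer traditionally rendered frames per second than native rendering.

native_fps = 60.0               # assumed frame rate with frame generation disabled
output_fps_with_2x_fg = 104.0   # assumed output with 2x frame generation (short of 120)

rendered_fps_with_fg = output_fps_with_2x_fg / 2  # every other displayed frame is interpolated
print(f"Rendered frames per second with frame gen: {rendered_fps_with_fg:.0f} (vs {native_fps:.0f} native)")
```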
And let’s be clear: Nvidia’s 5070/4090 claim is the firm laying the groundwork for a new paradigm that will be adopted by all vendors, because it’s becoming increasingly clear that the future of graphics technology lies more in artificial intelligence and ray tracing than in a bigger GPU with more shaders. That’s simply the way it has to be, because the cost of cutting-edge semiconductor manufacturing is only moving in one direction – and if you want more “performance” in the traditional sense, you’ll have to pay more for it. Even then, as this week’s Direct shows, you’ll find that many of the advances being made are only possible thanks to machine learning.
Finally, is this all just one big Nvidia scam, another refrain we hear often? Well, if the technologies discussed in this week’s Direct don’t excite you, I’ll leave you with a quote from Mark Cerny – working, of course, in partnership with AMD: “The strategies we used in PS4 Pro or the like… I wouldn’t say they’ve reached their limit, but they’re mostly about increasing GPU size or speeding up memory. So, if we look to the future, I think we will see improvements in ray tracing – there’s a lot going on there – and whatever we can do with ML…”