More and more of us are playing games. This growth is helped, in part, by the huge variety and increased accessibility of content that has come about in the last decade or so. Players have access to just about every imaginable genre and format, including traditional AAA video games, casual titles, and even social games.
Online casinos also offer many popular options, including a myriad of video slots and classic card and table games like roulette. These games in particular have enjoyed a surge in demand in recent years, as sites like oddschecker have made it easier for players to find the many bonuses and other promotions that casinos offer to new customers.
Alongside this growing number of options for gamers, there continues to be a big uptick in quality.
Over the years, we’ve seen video games grow in size and complexity. Compare Pac-Man, with its prescribed mazes that players must navigate while avoiding ghosts, to the open-world metropolis they can explore in Grand Theft Auto V. The difference is night and day, with just about every area of design seeing enormous progress over the 33 years between their respective release dates.
Online multiplayer modes have also added to the size and complexity of games, creating new challenges for players, who can now compete against skilled people from around the world rather than the often flawed computer-controlled characters.
Improving Visuals
But perhaps one of the most notable areas of progress has been in the visuals of games.
In early titles, developers had to get very creative to make characters and objects that were easy to recognize, despite the limited color palettes and the small number of pixels they had available.
One of the best examples of this is Mario. While his red and blue get-up makes him one of the most recognizable video game characters of all time, much of this was originally out of necessity. His hat was created to avoid the need to draw and animate hair, something that is still complicated today, while his mustache was devised to help make the features on his face easier to spot.
Today, however, developers have many more tools and much more powerful hardware to help create better-looking characters, beautiful backdrops, and stunning vehicles and objects.
To fully appreciate their work, some players spend significant sums on building the most capable gaming PC, allowing them to turn the graphics settings all the way up to the max.
Yet, despite all this advancement and investment, it’s still possible to tell the difference between games and the realism captured by a camera. Could this change? Or will there forever be a gap between the two?
Recent Developments
Over the years, we’ve seen major steps forward in video game graphics, such as when developers began producing 3D games rather than the two-dimensional side-on titles we saw in the 1980s and 1990s.
Sizable increases in the number of colors and screen resolutions also made a big difference in the quality of gaming graphics.
Some of the more recent developments have been around lighting and shadows. Without them, it’s impossible for a game’s visuals to come close to looking realistic, as our eyes are used to objects being lit from individual sources rather than the uniform illumination seen in many video games.
Different techniques have been used for this, but the most recent approach has been ray tracing. This works by tracing the paths of many individual rays of light, typically cast from the camera into the scene, and following how they bounce off objects and back toward the light sources. It is about as close to the real world as is possible with current technology and, when done right, looks incredibly life-like.
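For those curious about what that actually involves, the short sketch below, written in Python purely for illustration, traces a single hypothetical camera ray toward one sphere and shades the point it hits according to its angle to a light. It’s a bare-bones version of the building block that real ray tracers repeat millions of times per frame, with bounces, shadows, and reflections layered on top.

```python
import math

# A deliberately simplified, single-ray illustration of the idea behind ray
# tracing. The sphere, light position, and camera here are made-up values;
# real engines repeat this kind of calculation millions of times per frame
# and add bounces, shadows, and reflections on top.

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is assumed to be normalised, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def shade_pixel(camera, direction, sphere_center, sphere_radius, light_pos):
    """Trace one camera ray and return a simple brightness value in [0, 1]."""
    t = ray_sphere_hit(camera, direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # the ray missed everything: background
    hit = [camera[i] + t * direction[i] for i in range(3)]
    normal = [(hit[i] - sphere_center[i]) / sphere_radius for i in range(3)]
    to_light = [light_pos[i] - hit[i] for i in range(3)]
    dist = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / dist for v in to_light]
    # Simple Lambertian shading: brightness depends on the angle between the
    # surface and the direction to the light, which is what gives ray-traced
    # images their natural-looking falloff.
    return max(0.0, sum(normal[i] * to_light[i] for i in range(3)))

# One ray fired straight ahead at a sphere lit from above and in front.
print(shade_pixel((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, (0, 5, 0)))
```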
It isn’t perfect, however, and more work still needs to take place.
Within a Decade
Speaking in July 2021, Strauss Zelnick, the CEO of Take-Two, the company that produces hits like Grand Theft Auto, Red Dead Redemption, and Bully, said that he believes it may be possible to have photo-realistic graphics that “look exactly like live-action” within a decade.
With nine years left on his timeline, the clock is ticking. Yet, at the time, Zelnick said that he didn’t know how that would happen, only that he felt the technology would be available by then. Some may, therefore, argue that he’s writing cheques that he can’t cash or even living in a dreamland.
There’s perhaps more to his optimism than this, though. Technically, the technology to create photo-realistic graphics does already exist.
Computer-generated imagery (CGI) is used to create photo-realistic content for movies, television shows, and other video content. This is very common in the creative industry, but it requires a lot of computing power, and it helps to have a good knowledge of photography.
We can see examples of this in movies like Interstellar (2014), War for the Planet of the Apes (2017), and Avengers: Endgame (2019).
While this CGI looks great, it can’t currently be transferred to video games.
The reason for this is that producing such content for movies requires a huge amount of grunt work, undertaken by banks of powerful computers. Even with a platoon of PCs, it can still take hours or even days to complete the rendering for just a few minutes of footage.
Video game developers don’t have such luxuries. While they may do a lot of work upfront, much of the rendering work for their game’s graphics is done on the fly, reacting to the inputs of the player and the behavior of the NPCs.
This means that, at present, the technology to crunch all of these numbers in real time simply isn’t available.
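A rough, back-of-the-envelope comparison shows the scale of the gap. The two hours per frame used below is an assumed figure purely for illustration, as real render times vary enormously from shot to shot.

```python
# Assumed, illustrative numbers: a film render farm might spend hours on a
# single frame, while a game running at 60 frames per second must finish
# each frame within a fixed, tiny budget.
FILM_HOURS_PER_FRAME = 2   # assumption for illustration only
GAME_FPS = 60

film_seconds_per_frame = FILM_HOURS_PER_FRAME * 3600
game_seconds_per_frame = 1 / GAME_FPS

print(f"Offline render:   {film_seconds_per_frame:,} seconds per frame")
print(f"Real-time render: {game_seconds_per_frame * 1000:.1f} milliseconds per frame")
print(f"The game has roughly {film_seconds_per_frame / game_seconds_per_frame:,.0f}x less time per frame")
```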
Moore’s Law to the Rescue
If you consider the continued advancement in computer processing power that we have seen over the last few decades, Zelnick’s predictions may not be so fanciful after all.
This is because Moore’s Law, an observation by Intel co-founder Gordon Moore, suggests that chip manufacturers can fit twice as many transistors into the same area of silicon every two years.
Therefore, by 2031, ten years and five doublings on from Zelnick’s prediction, computer chips should pack roughly 32 times as many transistors as they did when he made it, giving gaming hardware a chance to catch up with the CEO’s prediction.
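As a quick sanity check of that figure, assuming a clean doubling every two years from the date of the 2021 prediction:

```python
# Back-of-the-envelope Moore's Law arithmetic: ten years on the timeline at
# one doubling every two years gives five doublings in total.
YEARS = 2031 - 2021
DOUBLING_PERIOD_YEARS = 2

multiplier = 2 ** (YEARS / DOUBLING_PERIOD_YEARS)
print(f"Transistor density multiplier by 2031: {multiplier:.0f}x")  # 32x
```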