The latest graphics cards offer great performance, but are they destroying the games industry?
Ask any self-respecting gamer what is more important – graphics or gameplay – and they’ll tell you that the latter is far more significant. Why, then, does the industry at large make such a concerted effort to develop new graphics technology at the expense of more meaningful innovations?
The answer lies with graphics card manufacturers and not, as you might expect, with games designers. Whereas semiconductor firms such as Intel and AMD will soon struggle to keep pace with Moore's Law (which observes that transistor counts, and by popular extension processing power, roughly double every 18 months to two years), graphics card makers double performance as frequently as every six months.
This is largely because of the architectural differences between CPUs and Graphics Processing Units (GPUs). A detailed comparison is beyond the scope of this column, but the streaming model of computation used by GPUs is a powerful way of processing data when only one or two transformations are to be performed on each piece of data (such as when manipulating a pixel).
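To make the streaming model concrete, here is a minimal sketch (illustrative only, not real GPU code) of the idea described above: a single small transform is applied independently to every pixel, which is exactly the kind of work a GPU's many parallel units can race through. The `brighten` kernel and the toy image are hypothetical examples, not drawn from any real graphics API.

```python
# Minimal sketch of the GPU streaming model: one small transform
# (a "kernel") is applied independently to each piece of data.
# Because no pixel depends on any other, the work parallelises
# trivially across a GPU's many processing units.

def brighten(pixel, amount=40):
    """Hypothetical per-pixel kernel: add to each channel, clamp at 255."""
    return tuple(min(255, channel + amount) for channel in pixel)

# A tiny 2x2 "image" of RGB pixels (hypothetical data).
image = [(10, 20, 30), (250, 0, 128),
         (100, 100, 100), (0, 255, 5)]

# A GPU conceptually runs the kernel on all pixels at once;
# here we simulate that with a sequential map.
result = [brighten(p) for p in image]
```

A CPU, by contrast, is optimised for long chains of dependent, branching operations, which is why the two architectures improve at such different rates.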
Add to this the fact that, in order to improve performance, CPU designers must contend with numerous complications (such as supporting arbitrary branching and general-purpose workloads) that GPU designers are largely spared.
The development of graphics cards is inextricably linked to the development of games. In the past, it was games makers that demanded more processing power in order to facilitate more diverse playing experiences.
Today the tables have turned, and it is the graphics card makers that are challenging games developers to create ever-more complex titles.
Three years ago I would have said this change in roles was critical to the growth of the industry, but now I am starting to believe it could ultimately destroy it.
Many reviews fawn over the graphical beauty of games such as Far Cry or Doom 3, but few criticise their lack of imagination. Both titles provide very good gameplay and outstanding graphics, but it is disheartening to see that, despite so many technical advances, the developers are still stuck in a design rut.
Beneath Doom 3’s outstanding visuals lies what is essentially a reheated update of a game that most of us have already played to death. Far Cry, a game that many would consider a modern classic, fares only slightly better.
Like Doom 3, it uses a versatile games engine that allows players to interact realistically with their environment, but the potential this offers isn’t truly explored.
Depressingly, there is little to suggest that the trend of good-looking games that lack innovation is likely to change. Games publishers are increasingly being bullied into incorporating visual enhancements at the expense of originality in game play.
A typical game can cost millions of pounds and take several years to produce. Most games companies have very limited financial resources and face tremendous pressure from investors to deliver commercially successful titles. As a result, consumers are seeing an increasing number of rehashed ideas and sequels.
Designers have become far less willing to experiment, and the release of more powerful graphics cards and the next generation of games consoles only compounds the problem.
Electronic Arts, one of the largest games publishers, has voiced concerns that the cost of developing games for PlayStation 3 and Xbox 2 could be up to 200 per cent higher than for the current hardware.
Smaller companies have already begun to feel the pinch. Eidos, the firm responsible for the massively successful Tomb Raider series, looks set to be swallowed by a larger entity.
While there is nothing intrinsically wrong with corporate buyouts, wealthy publishers are generally more concerned with attracting existing customers than with growing the industry. As such, they tend to release titles based on popular films or sports, with a smaller focus on original games.
At the time of writing, the top 20 all-formats games chart had only one original title. The rest were sequels or based on some form of sport or movie tie-in.
This erosion of choice is detrimental to the future of the games industry. Smaller, more innovative companies will be forced to close their doors or be consumed by industry giants. This will ultimately change a once-vibrant industry into a market unwilling to back any project that lacks a guaranteed audience.
So what’s the future of gaming? Are we headed for the bleak world of The Sims version 27? I believe that less blame should be placed on the people who actually create games, and more on those who perpetuate the culture of graphics innovation.
Nintendo, the oldest and arguably most innovative maker of games hardware and software, shares this sentiment. The successor to its current GameCube console, codenamed Revolution, looks set to focus less on using next-generation technology, and more on creating a next-generation way of playing games.
The idea of halting technological enhancements to save an area of the computer software industry may sound bizarre, particularly coming from an IT journalist, but if anyone has any better ideas, please let me know.