
GoodOldSmurf

2015 and the "new generation PC components"

New gen gaming PC hardware poll (18 members have voted)

  1. What do you think the performance gap between the current generation and the next generation will be?

    • The difference will be minimal
      5
    • The difference will be maximal
      5
    • Remains to be seen
      8
  2. Do you think Star Citizen will force new hardware to be dramatically superior (perhaps a 50% performance increase) to currently available hardware?

    • Yep
      5
    • Nope
      6
    • Not sure
      7
  3. Should next-generation components offer a 50% increase in performance, would you upgrade?

    • Yes
      13
    • No
      1
    • I have no idea
      4


36 posts in this topic

If hex-cores come down to quad-core prices, and likewise octa-cores come down to hex-core prices, I would see that 10 years being a bit of an overestimation.

Right now it's all about the price. That's why we won't see 'good games' recommending 6 or more cores; they'll keep requiring at least mid-range quad-core CPUs.


Honestly, 6-cores won't make it to "standard" and will be skipped, much like the "unofficial 3-core skip" that happened to be AMD-only, since only AMD manufactured 3-cores as far as I'm aware.

You have to look at the architecture, clock speed, and cache (and even the chip socket in some cases) to understand why 6-cores will be skipped.

6-cores are essentially halfway between 4 and 8 cores in every way, but the demand for twice the power of a quad-core keeps growing, and if more games follow Star Citizen's ideology it's going to go sky-high :P

Still, you make a valid point on the pricing, so it's not entirely impossible; I just doubt it happens ^_^


Judging by the way NVIDIA has been doing business for the last 10 years, I really don't think it would be good news if they started competing all-out in the market. IBM, however? Now that would be amazing, considering they have always been aggressive on the competition :)

I can't even picture Star Citizen running on some "new generation" IBM components lel :lol:

What have nVidia done lately?

You mean trying to lock AMD out of the gaming world by making developers optimize games for nVidia cards?

Yes, that's crappy... but they're also one nm generation ahead of AMD, and last I heard they were producing some sort of non-x86 CPU that ran fairly well.

AFAIK, IBM was researching material alternatives to silicon, which to me would be rad.

Non-silicon and trinary systems are going to be groundbreaking advancements for computing, IMO.

 

Seeing that many games now require quad-cores to run, and devs continuously complain about quad-cores lacking power, I really doubt we will stay with quads for more than 10 years.

Well... 10 years in computing means 3-4 generations, so it'll be an eternity.


Well, computers work in binary, so anything not aligning with it will have problems becoming standard.

3-core Athlons and Phenoms were actually 4-core CPUs with a locked core, usually a malfunctioning one.

I also vaguely recall something about early 6-cores being 8-cores with 2 unlockable cores, though the process could have matured into something similar.

Triple-channel memory was also a non-binary system that Intel had to drop for whatever reason.

These kinds of platforms would only be viable for the mainstream in a theoretical trinary-computing environment.

Why? Because if you're sending information to a core for processing, there should be a header telling it where it's being processed. Optimal use requires the whole header to be used, not only half of it. As it stands, 1, 2, 4, and 8 cores use headers optimally, so those are bound to become standard, while others are mere steps in refining the process to achieve the next standard.
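For what it's worth, the "aligning with binary" intuition can be stated precisely: a core count fills a whole number of addressing bits only when it is a power of two. A minimal sketch of that check (just an illustration of the power-of-two idea; it doesn't verify the header claim above, which is the poster's speculation):

```cpp
#include <iostream>

// A count n is a power of two exactly when a single bit is set,
// i.e. n & (n - 1) == 0 (and n != 0).
bool isPowerOfTwo(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}

int main() {
    for (unsigned cores : {1u, 2u, 3u, 4u, 6u, 8u}) {
        std::cout << cores << " cores: "
                  << (isPowerOfTwo(cores) ? "binary-aligned" : "in-between")
                  << '\n';
    }
}
```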

BTW, I can't recognize you without the green face, Smurf ^_^


I find NVIDIA doesn't have much of an advance on AMD at all; they both offer pretty much the same thing now.

10 years is 3-4 cycles of hardware, not generations. Although SOME people will say "generation", I find an x86 quad-core CPU with multiple variants is hardly "generations" and more like cycles (a new generation means the hardware is completely new, or new in the large majority of its parts).

I know, for the profile pic I felt like having a bit of a change :P


I see IBM working on 14nm processes and nVidia launching 20nm hardware, and can only guess how much more powerful AMD hardware could be if they weren't still stuck at 28nm.

One of their worst complaints has been overheating, and smaller processes reduce it greatly; hence the Rage 300 soon to come at 20 nm.

I... see... so the jump from 16 to 32 bits, or changing from CPUs to APUs, would make a generation leap, while smaller increments would be cycles?


Microns only mean the hardware may require less power to produce as much at better speed (and that depends A LOT on the chip), so no, it doesn't really matter.

A lot of people keep claiming a lower micron count means better performance; not necessarily. That's how they overlooked the Rage-200 series until reviewers tested the R9 270, R9 280, and R9 290.

Benchmarks will always say item X scores "better", and that's why I hate benchmarks: they're a virtual evaluation based on synthetic testing, while live testing (running a game live and filming or reporting it) is much more accurate, harder to dismiss, and more honest than a program that measures virtual performance.
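As a sketch of what "live testing" looks like as numbers rather than a synthetic score, here's a minimal frame-time logger (the 16 ms sleep is a hypothetical stand-in for real per-frame game work):

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Time a fixed number of "frames" and report average frame time and FPS.
int main() {
    using clock = std::chrono::steady_clock;
    constexpr int frames = 120;

    auto start = clock::now();
    for (int i = 0; i < frames; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // fake frame work
    }
    std::chrono::duration<double, std::milli> elapsed = clock::now() - start;

    double avgMs = elapsed.count() / frames;
    std::cout << "avg frame time: " << avgMs << " ms ("
              << 1000.0 / avgMs << " FPS)\n";
}
```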

 

With that said, that's why I always say benchmarks are irrelevant and pointless: both AMD and NVIDIA GPUs will give pretty much the same frames with the same features at the same resolution, with the TDP and the price being the only differences. CPUs are a bit more complicated, but the same still applies for the most part <_<

So in the end you go with the brand that gives you no (or fewer) issues (excluding random technical issues caused by poor choices, e.g. an insufficient PSU or an inefficient GPU brand) and that offers you the best price (hence my preference for AMD for gaming).


AFAIK, smaller measurements affect the amount of electricity required for the chips to perform, which lowers heat production.

But I take that from reviews and editorials, and may be completely lost.
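That matches the textbook first-order model for CMOS switching power, P ≈ C·V²·f: a die shrink lowers capacitance C and lets voltage V drop, so power (and therefore heat) falls at the same clock f. A quick sketch with made-up numbers:

```cpp
#include <iostream>

// First-order CMOS dynamic power: P ~ C * V^2 * f.
// All values below are hypothetical, normalized illustrations.
int main() {
    auto power = [](double c, double v, double f) { return c * v * v * f; };

    double p28 = power(1.00, 1.20, 1.0); // notional 28 nm chip
    double p20 = power(0.75, 1.05, 1.0); // notional 20 nm shrink, same clock

    std::cout << "20 nm power relative to 28 nm: " << p20 / p28 << '\n';
    // prints ~0.57: roughly 40% less power, hence less heat, at the same clock
}
```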

Many benchmarks favored certain chips over others, which makes them flawed tests.

I don't discount them, but I don't take them as absolute truth either.

And I agree on live testing. That's why I like channels like JayzTwoCents, where his "benchmark" to compare the 4th- and 5th-generation i7 was the time it took for the PC to render a vid for his channel ^_^


That's exactly what I just said about smaller processes and heat :lol:

 

It also raises the cost of manufacturing; the smaller it is, the more expensive it is, and not necessarily better in terms of performance.

Compare the 4th-gen CPUs with the 3rd-gen and 5th-gen on Intel's side.

You will notice a lot of funny results, like the 5th gen having a considerably smaller process yet generating a lot more heat due to its cache (which is pretty much meaningless in gaming, as a fast core clock speed is required, along with a fast core, in order to process and reprocess textures and geometries).

So yeah, some stuff is dependent on cache, and the upside with Intel is the faster single thread, but when it comes to 2-core games a faster clock speed will catch up, and in 4-core games it will top it.

An 8-core-enabled game? Anything dependent on cache will get demolished by a faster core clock speed ^_^
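A back-of-the-envelope way to frame that clock-versus-cache trade-off: single-thread throughput is roughly IPC × clock, and a bigger cache mostly shows up as higher IPC. The numbers here are entirely hypothetical:

```cpp
#include <iostream>

// Rough single-thread throughput model: perf ~ IPC * clock (GHz).
// Both chips are invented for illustration.
int main() {
    double ipcBigCache = 1.30, clkBigCache = 3.5; // cache-heavy chip
    double ipcFastClk  = 1.15, clkFastClk  = 4.4; // high-clock chip

    std::cout << "cache-heavy chip: " << ipcBigCache * clkBigCache << '\n'; // 4.55
    std::cout << "high-clock chip:  " << ipcFastClk  * clkFastClk  << '\n'; // 5.06
    // Which wins depends on the actual numbers, not on a blanket rule.
}
```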


Heyyo,

 


We're in the year 2015 and there are still a lot of games that don't even use a full quad core... The Intel Core 2 Quad Q6600 came out at the start of 2007... Almost ten years later? There's still a bunch of games that are single-threaded lol fml (and fucksakes WarGaming!! looking at you especially with World of Tanks!)... but hey, it definitely is getting better.

Right now? An Intel i5 or FX-8300 is a great sweet spot to be in. Otherwise? The Intel i3s and AMD FX-6300s are cost-effective and will still power a high-end GPU nicely with a decent overclock. That could change in the future... but I'm also talking across a very wide range of games... and there isn't a crazy number of games that fully utilize my i7-3770K's eight working threads.
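If you're curious how many hardware threads your own chip exposes, standard C++ can report it directly (on an i7-3770K with Hyper-Threading this prints 8, i.e. 4 cores × 2 threads):

```cpp
#include <iostream>
#include <thread>

int main() {
    // May return 0 if the value isn't computable on the platform.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "hardware threads: " << n << '\n';
}
```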

 


Now... shrinking the die size definitely helps performance... but look at the leap from NVIDIA's Fermi to Kepler to Maxwell... that's pretty impressive, and Kepler and Maxwell are both running on the same 28nm. Massive leaps in efficiency were achieved just by revising the current architecture... so yeah, die size is a factor, but not nearly as much as the arch.

Tbh I'm holding off upgrading unless a crazy good deal falls in my lap... NVIDIA's Pascal looks awesome on paper, with the 3D memory wafers seriously reducing the card's size, plus NVLink and full-featured DirectX 12 capabilities, but time will tell if 2016 will be the year of a major revamp of my PC... but for the sake of having DDR4 RAM and a CPU with more cores/threads? Both are a no. They won't make crazy good performance gains over my i7-3770K @ 4.2GHz and DDR3 2133MHz RAM... not for the cost of trying to sell my current parts to jam a better rig together. :P

AMD does need to revamp their cards more, though... they're constantly cutting the prices on their cards due to the efficiency of NVIDIA's GPUs, and that's affecting their profit margins negatively...

Same goes for AMD vs Intel CPUs... AMD is pushing core count over efficiency, where Intel has more of an edge, especially with games that aren't properly multi-thread optimized... which, like I said... 2015... still happens... almost a decade after the first quad-core CPUs started coming to consumers...

Dying Light is probably the main culprit of poor multi-threaded optimization right meow... looking at lots of complaints, most are from AMD CPU users, where that single-threaded performance dip makes a difference. Techland have noted the issue and said they are working on a fix for a future patch, which is good... hopefully they can properly fix their multi-threaded optimizations.

But on the flip side of things? Call of Duty: Advanced Warfare and Alien: Isolation are properly multi-thread optimized, which is awesome... yet both those titles seem to have issues with multi-GPU, sadly... oh well, can't win them all lol. Most people have multi-threaded CPUs rather than multi-GPU systems, so maybe their goal was to support the most common hardware setups and multi-GPU was only an afterthought...


And even worse, multithreaded processing tends to stall some threads if others aren't finished.

Until software engineers find a way to prevent this pointless idling, multicore won't really take off.
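A minimal sketch of that stall in a fork-join pattern (the workload times are made up): the fast worker finishes almost immediately, but the joining thread still sits and waits for the straggler, so the extra core buys little.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Simulated worker: sleeps to stand in for real computation.
void work(const char* name, int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    std::cout << name << " done after " << ms << " ms\n";
}

int main() {
    auto start = std::chrono::steady_clock::now();

    std::thread fast(work, "fast worker", 10);
    std::thread slow(work, "slow worker", 200);
    fast.join(); // returns almost immediately...
    slow.join(); // ...but we still idle here until the slow thread finishes

    auto total = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << "step took " << total.count() << " ms (bounded by the slowest thread)\n";
}
```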


Come on. Give the Belarusians a break! It took them 4 years of stronk programming to move sound to a second thread.

Wait for Havok to run on a third core by 2018, and probably stronk physiks by 2025 on a fourth.


Heyyo,

 


You know what's ironic about multicore optimizations? Gas Powered Games.

Wargaming bought out GPG and thus their game engine (Despair Engine)... ever played a Despair Engine game? Supreme Commander runs AMAZING on PC. It uses multicore very efficiently, last I remember, too...

http://www.hardocp.com/article/2007/03/26/supcom_intel_core_2_quad_gameplay_advantages#.VM_172jF98E

and...

http://www.hardocp.com/article/2007/03/26/supcom_intel_core_2_quad_gameplay_advantages/3#.VM_2JmjF98E

... SO WHY AREN'T THEY JUST USING THIS GAME ENGINE, OR USING IT TO FIX THE BIGWORLD ENGINE!?!? stronk Belarusian mentality. :(

I couldn't care less about Havok... I want my framerates... pretty sure anyone with a CPU from 2006 onward would enjoy some stronk multithreaded optimizations, as it would give the whole player base a performance boost...

Oh... and the Despair Engine works great for World of Tanks... WoT Xbox 360 Edition uses the Despair Engine for rendering and the BigWorld server engine for everything else... So... wtf...

The Despair Engine runs on consoles, PCs, and smartphones... so I bet even WoT Blitz would run better if it used the Despair Engine to make better use of the quad-core Qualcomm CPU in my HTC One M7...

*grumbles*
