28 Comments
qap - Tuesday, May 31, 2016 - link
Has anyone tested the quality of hardware HEVC encoders lately? At work we need realtime video encoding, but the last time we did our research, output from hw HEVC encoders was barely comparable to software AVC encoders set to "fast" (= low quality, so we could process them in realtime).

JoeyJoJo123 - Tuesday, May 31, 2016 - link
"Software encoding" (specifically x264, and to an extent the relatively newer x265) are the best software libraries when encoding live video to H.264 encoded content.I really don't like the terms "hardware encoders" and "software encoders" because in either instance, both hardware and software are involved. In other words, "hardware encoders" use dedicated encoding chips (hardware) baked into other hardware (CPU/GPU) to offload otherwise intensive operations, and use dedicated (often proprietary) software to do these processes. It's the same thing with "software encoders" which use CPU resources to process these intensive operations, and use software libraries such as x264 to do these processes.
qap - Tuesday, May 31, 2016 - link
Great. You have proven how widely used technical terms bother you. Now - do you have anything relevant about the question I asked?

JoeyJoJo123 - Tuesday, May 31, 2016 - link
My post is relevant, but because your intelligence prevents you from rereading the post to understand the message, I'll spoonfeed you again. I've made it clear that the best quality option is x264, quite clearly, so here goes again.

Intel's QuickSync has poorer quality. (Reasonably quick, but prone to dropped frames and blocky video artefacts.)
AMD's Video Coding Engine (VCE) is the same story.
Nvidia's Shadowplay is the same story.
To date, all these "hardware encoding" options have used an underpowered purpose-built chip running proprietary software to handle the encoding. Meanwhile, the power of open-source software (x264) and traditional "software encoding" allows the community to make truly high quality video encoding possible. That being said, most livestreaming is usually done on the "Fastest" setting in Xsplit, OBS, etc for minimal delay to the stream, even with powerful hardware, but if you don't mind delay and want to record high-quality local copies of the video file, some people use slower presets.
Also, instead of asking Anandtech comments, which are a festering pile of arguments and nonsense, you can also Google "x264 quicksync shadowplay vce comparison" and judge the quality for yourself.
Oh and by the way, nobody's entitled to having someone else digest easily researchable information and feed you like a mother bird in the comments section of an article.
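[Editor's note: a minimal sketch of the preset trade-off described above, assuming an ffmpeg build with libx264; the input file, bitrates, and stream URL/key are placeholders, and the exact flags OBS/Xsplit use under the hood will differ.]

```shell
# Live streaming: prioritize encoding speed over compression efficiency.
# "veryfast" is roughly in the territory of the "Fastest"-style presets
# in broadcasting clients; zerolatency tuning reduces encoder buffering.
ffmpeg -i input.mp4 -c:v libx264 -preset veryfast -tune zerolatency \
  -b:v 3500k -maxrate 3500k -bufsize 7000k \
  -c:a aac -b:a 160k -f flv rtmp://live.example.com/app/STREAM_KEY

# Local recording: no delay constraint, so a slower preset squeezes more
# quality out of the same bits (constant-quality -crf instead of a bitrate cap).
ffmpeg -i input.mp4 -c:v libx264 -preset slower -crf 18 \
  -c:a aac -b:a 160k recording.mkv
```

Same codec, same source; only the speed/quality trade-off changes between the two invocations.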
qap - Wednesday, June 1, 2016 - link
Using your words - is "your intelligence preventing you" from understanding the meaning of the words "lately" and "test"? In the original post you gave me your opinion (!= test). And in this one you gave me the extremely helpful suggestion to "go google" (if you had done it yourself, you would have found the problem with the word "lately").

Then you ranted about how it is implemented - everyone who regularly reads AT knows how, but everyone else understood that it is not relevant if someone is looking for a recent test.
JoeyJoJo123 - Wednesday, June 1, 2016 - link
Dude, seriously? You got your answer, why are you still complaining about the answer that was provided to you, free of charge?
bobdvb - Monday, June 6, 2016 - link
There are proprietary AVC codecs which are superior to x264; x264 just happens to be the most easily integrated with most people's workflows. There are hardware ASICs for AVC encoding which can't really be called software codecs, because they are heavily dependent on the hardware they are coded for, whereas most software codecs use more generic mathematics, perhaps optimised by the encoder. And bindings to hardware such as QuickSync show problems because they are hardware encoder blocks which simply aren't very good.

bobdvb - Monday, June 6, 2016 - link
http://www.ambarella.com/products/broadcast-infras...

azrael- - Tuesday, May 31, 2016 - link
They lost me at "soldered down BGA". If they had been socketed I'd very much have considered building a system around one of them.

r3loaded - Tuesday, May 31, 2016 - link
"From a revenue and profit perspective, Intel’s goal is to sell high-end, high margin E5/E7 parts which can do similar things but cost up to 10x. By offering server level eDRAM parts at consumer prices when there is no competition in that space could drive potential customers for cheaper options, lowering Intel’s potential, and why we only see Iris Pro on quad core ECC-enabled processors at this time. There are a few Iris Pro enabled SKUs at the consumer space, but as mobile parts or for mini/all-in-one machines, rather than full blown gaming systems or workstations."Translation: Intel are committed to the cause of being asshats about this. AMD really needs to hit it out of the park with Zen.
hojnikb - Tuesday, May 31, 2016 - link
The die is simply too long (4core/HD580 one anyway) to fit into a 1151 socket.

MrSpadge - Tuesday, May 31, 2016 - link
I'm not convinced this is true, but if they wanted to provide this CPU for socket 1151 they could easily have rearranged those elements.

nagi603 - Tuesday, May 31, 2016 - link
Yeah, me too. I'd love to offload twitch streaming, but if intel continues to be an idiot about it...

That is, if they can get QuickSync to work properly. I mean, my current i7-4770K simply dies in seconds if you set a usable 1080p bitrate, and it's simply not stable enough even at a twitch-friendly bandwidth.
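[Editor's note: as context for why a "usable 1080p bitrate" is hard for streaming, bits-per-pixel is a quick sanity check; the 6 Mbps figure is a typical Twitch-era cap and the ~0.1 bpp comfort zone is a common rule of thumb, neither is from the article.]

```shell
# bits per pixel = bitrate / (width * height * frames per second)
# A common rule of thumb puts ~0.1 bpp as comfortable for H.264;
# 1080p60 at a 6 Mbps streaming cap lands well below that.
awk 'BEGIN { printf "1080p60 @ 6 Mbps = %.3f bpp\n", 6000000 / (1920 * 1080 * 60) }'
# prints: 1080p60 @ 6 Mbps = 0.048 bpp
```

That scarcity of bits per pixel is why encoder efficiency (and thus x264 vs. the hardware blocks) matters so much at stream-friendly bitrates.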
JoeyJoJo123 - Tuesday, May 31, 2016 - link
You do realize you can offload twitch streaming with a dedicated streaming PC, with a dedicated video capture card and its own high-thread-count CPU running an x264-supported broadcasting client (Xsplit, OBS, etc), right?

stephenbrooks - Wednesday, June 1, 2016 - link
Buy another PC? LOL. I thought the point of these recent processors (and Broadwell-E with the high core count) was that you were supposed to have enough cores to do everything on one PC.

Although with some of the Broadwell-E prices, buying a 2nd PC would likely be cheaper.
beginner99 - Tuesday, May 31, 2016 - link
Exactly. Broadwell-E is a fail, and not releasing socketed Skylake with Iris Pro is another fail. They lost the mobile war...

MrSpadge - Tuesday, May 31, 2016 - link
Yes, because Broadwell-E and socketed CPUs are essential for mobile...

BOMBOVA - Monday, September 26, 2016 - link
Same here, I have a new camera with 100 megabit streaming, and just got an internet connection capable of uploading/streaming at this rate. I need a chip like this on a dedicated server; going to look at the software package carefully.

satai - Tuesday, May 31, 2016 - link
Isn't the E3-1585 v5 probably the x86/64 CPU with the highest single-thread performance and ECC support? (Up-to-date microarchitecture + highest turbo in this generation + large cache)

MrSpadge - Tuesday, May 31, 2016 - link
Yep. The 6700K may beat it, depending on how the application benefits from the L4$, but doesn't have ECC.

vastac13 - Tuesday, May 31, 2016 - link
I see what you did there, "World Domination" LOL

Considering the E7-8800 v3 can be multi-socketed up to 144 cores in total per board, theoretically and on paper.
LukaP - Tuesday, May 31, 2016 - link
You've had me at "E7-8800, Suited for World Domination".
+1

Gc - Tuesday, May 31, 2016 - link
This article is a tad confusing: mobile members of the Xeon E3-1500 v5 family have been out for a couple quarters and are in current products. This announcement is for server products. What's the difference between mobile and server? Compare the E3-1545M (mobile) with the E3-1565L (low-power single-socket server). Intel Ark lists them as using the same socket. Despite the higher number, the server part is lower power and lower frequency than the mobile part. Maybe different parts of the auxiliary circuits are faulty/disabled, e.g., maybe the server part doesn't implement WiDi or other video output. The Ark data looks incomplete, e.g., the mobile data does not list the number of GPU execution units, even though both are P580 with the same frequencies and address range.

Drazick - Tuesday, May 31, 2016 - link
I want to see Extreme Edition CPU (Above 90W TDP) + 6 / 8 Cores + Iris Pro + 128MB eDRAM + 4 Memory Channels.

Intel, please give it to us.
CSMR - Tuesday, May 31, 2016 - link
Any news about affordable Iris Pro parts? A lot of mobile processors have been announced, with Wikipedia saying Q1 for the mobile parts. But apart from the Skull Canyon NUC and some very expensive mobile options, there are no motherboards, notebooks, or desktops announced that are based on these. I hope it won't turn out like previous Iris Pro generations, when it was hard to find these chips outside Apple products.

neo_1221 - Tuesday, May 31, 2016 - link
"Traditionally there are three ways to do this: raw CPU horsepower, FPGAs, custom fixed-function ASICs, or GPUs."That looks like 4 to me.
Also, your table lists the E3-1558L v5 as a 35W part, but the slide from Intel shows the E3-1565L as the 35W part - which is correct? 1558L makes more sense based on the specs.
iwod - Wednesday, June 1, 2016 - link
The last Intel HW HEVC encoder was barely better than x264.

I am wondering in what scenario this trade-off will make sense. The only thing I could think of is AirPlay.
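[Editor's note: a sketch of how one could test that hardware-HEVC vs. software-AVC comparison, assuming an ffmpeg build with Quick Sync support; the clip name and 3000 kbps bitrate are placeholders, and hevc_qsv availability depends on platform and build options.]

```shell
# Encode the same clip with the Quick Sync HEVC block and with software x264
# at the same target bitrate.
ffmpeg -i clip.mp4 -c:v hevc_qsv -b:v 3000k qsv_hevc.mp4
ffmpeg -i clip.mp4 -c:v libx264 -preset medium -b:v 3000k sw_avc.mp4

# Score each encode against the source with the SSIM filter (higher is better);
# the metric is printed in the ffmpeg log, no output file is written.
ffmpeg -i qsv_hevc.mp4 -i clip.mp4 -lavfi ssim -f null -
ffmpeg -i sw_avc.mp4 -i clip.mp4 -lavfi ssim -f null -
```

If the hardware HEVC encode can't clearly beat x264 at equal bitrate, the trade-off only makes sense where CPU time or latency, not quality, is the constraint.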