-
Memory management bug in Intel CPUs threatens massive performance hits.
Posted by
Ampera
on 03 Jan, 2018 13:47
-
https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/
Lovely for people like me who run an i7-4790k.
Curious to know what this crippling bug actually is, but from what was described, I'm about ready to join the rest of the Intel users here in giving Intel a collective backhand slap to the head.
So the question is now what sort of performance hits could we be seeing...
-
#1 Reply
Posted by
stj
on 03 Jan, 2018 14:33
-
it's a memory management "bug" that should stop regular code seeing kernel workings - in simple terms.
if fixed in the way they are suggesting the hit will be absolutely huge.
they are talking about flushing the cpu cache every time a user-space thread makes a system call.
keep in mind the cache is what makes a difference between a celeron and a xeon!!!
-
#2 Reply
Posted by
bd139
on 03 Jan, 2018 14:51
-
-
#3 Reply
Posted by
Jeroen3
on 03 Jan, 2018 15:03
-
-
#4 Reply
Posted by
wraper
on 03 Jan, 2018 15:23
-
As a fix, they reset the translation lookaside buffer (TLB) on each context switch.
The Linux devs called it FUCKWIT:
Forcefully Unmap Complete Kernel With Interrupt Trampolines
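Roughly, what the reported fix (what became known as KAISER/KPTI) does on every user/kernel transition, sketched in C-style pseudocode (schematic only, not actual kernel code):

```c
/* Schematic of the KPTI/KAISER idea. Two page-table roots per process:
 *   user_pgd   - maps user space plus a tiny kernel entry "trampoline"
 *   kernel_pgd - maps everything (used only while inside the kernel)   */

void syscall_entry(void) {
    write_cr3(kernel_pgd);   /* switch to the full mapping; costs a TLB
                              * flush unless PCID-style tagging is used */
    handle_syscall();
    write_cr3(user_pgd);     /* hide the kernel again before returning
                              * to user mode                            */
}
```

The performance argument in this thread is exactly about those two `write_cr3` switches being added to every syscall and interrupt.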
-
#5 Reply
Posted by
Avacee
on 03 Jan, 2018 15:36
-
Short-term we can expect Intel's share price to drop - especially when the class action lawsuits start.
But I can't help but wonder if Intel's share price will go up in the mid-term as lots of people replace/upgrade their CPUs :p
-
#6 Reply
Posted by
dr.diesel
on 03 Jan, 2018 15:47
-
Looks like a fix might take a branch prediction rework. If so, not trivial, and if a fix wasn't already in the works, something like this could take quite some time to fix.
-
#7 Reply
Posted by
bd139
on 03 Jan, 2018 16:14
-
-
#8 Reply
Posted by
JoeO
on 03 Jan, 2018 16:17
-
And to think that just today I was reading a post here on the EEVBLOG about how great Intel's processors are compared to AMD's.
-
#9 Reply
Posted by
bd139
on 03 Jan, 2018 16:30
-
They're both as shit as each other. AMD has had a fair number of problems too, if that makes you feel better.
Really this has been on the cards for a number of years. All current Intel (and AMD) CPUs are pretty much emulators. They are actually crazy hyper-pipelined, microcoded RISC virtual machines that happen to run x86 and x86-64 instructions. The problem here is that most bugs can be fixed by changing the virtual machine implementation (microcode), but this one is actually in the physical implementation at the bottom of the pile of turds. They hire hordes of design verification engineers to make sure that there are no holes in the native and virtual execution environments, but this one slipped through. Actually quite a few have slipped through, causing everything from random process crashes to big security holes like ASLR bypass.
And this is what happens, people, when you layer abstractions so deep and so complicated that you need several volumes of books just to explain the ISA, all to maintain backwards compatibility with what is fundamentally some crack-smoke-inspired architecture from the late 1970s.
I hope to hell POWER wins some fans out of this.
-
#10 Reply
Posted by
Ampera
on 03 Jan, 2018 17:01
-
It's still a case of sit back and see how bad shit gets. Hope for the best, expect the worst.
-
#11 Reply
Posted by
dr.diesel
on 03 Jan, 2018 18:28
-
-
#12 Reply
Posted by
bd139
on 03 Jan, 2018 18:42
-
To be clear the problem doesn’t affect ARM as far as anyone knows but the architectural change in Linux is being applied as a “defence in depth” strategy.
-
#13 Reply
Posted by
dr.diesel
on 03 Jan, 2018 18:46
-
-
#14 Reply
Posted by
Mr. Scram
on 03 Jan, 2018 18:53
-
This is a pretty big one and they can't fix it with microcode either.
Interested to see real world load changes before everyone shits the bed however. Either way it's going to cost us a percentage more, particularly on AWS.
Intel CEO knew something was going down as well: https://www.fool.com/investing/2017/12/19/intels-ceo-just-sold-a-lot-of-stock.aspx
He wouldn't be that stupid, right? That's how you get torn apart by investigators or even go to jail.
-
#15 Reply
Posted by
bd139
on 03 Jan, 2018 18:55
-
-
#16 Reply
Posted by
Lightages
on 03 Jan, 2018 18:58
-
So do I start shopping for a Threadripper right now? Do I disable W7 updates until I get something that isn't going to be pulled back to 2010 levels of performance?
-
#17 Reply
Posted by
bd139
on 03 Jan, 2018 19:11
-
I would sit down and do nothing for now and see what happens. Most of the embargoes are only lifted tomorrow with patches as well so time will tell.
-
#18 Reply
Posted by
Gyro
on 03 Jan, 2018 19:19
-
This is a pretty big one and they can't fix it with microcode either.
Interested to see real world load changes before everyone shits the bed however. Either way it's going to cost us a percentage more, particularly on AWS.
Intel CEO knew something was going down as well: https://www.fool.com/investing/2017/12/19/intels-ceo-just-sold-a-lot-of-stock.aspx
He wouldn't be that stupid, right? That's how you get torn apart by investigators or even go to jail.
Combined with this quote from the link that dr.diesel posted...
Microsoft has been testing the Windows updates in the Insider program since November,
It does look dangerously close to insider trading.
-
#19 Reply
Posted by
Ampera
on 03 Jan, 2018 20:23
-
I am hearing anecdotal claims that the effect isn't as bad in 3D workloads as claimed, but it's still yet to be seen.
I don't think it will affect the consumer or even the generic power user as much as people who work with hypervisors.
It's definitely a dancing day for AMD, though. With AMD back in the game, who knows if this is going to sink Intel's 5-6 year strong lead.
-
#20 Reply
Posted by
Mr. Scram
on 03 Jan, 2018 20:30
-
I am hearing anecdotal claims that the effect isn't as bad in 3D workloads as claimed, but it's still yet to be seen.
I don't think it will affect the consumer or even generic power user as much as people who work with hypervizors.
It's definitely a dancing day for AMD, though. With AMD back in the game, who knows if this is going to sink Intel's 5-6 year strong lead.
AMD was lagging by a single-digit percentage in performance, but if these numbers turn out to be correct, AMD might very well lead by the same margin. I dread to think what discussions this will cause amongst the fanboys on either side.
-
#21 Reply
Posted by
tszaboo
on 03 Jan, 2018 20:36
-
Are you people crazy? It affects virtual machines that can read from each other. It only affects you if you are running more than one virtual machine on your PC/server, and one of them runs malicious code specifically designed to attack another virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
-
#22 Reply
Posted by
Monkeh
on 03 Jan, 2018 20:40
-
Are you people crazy? It affects Virtual machines that can read from each other. It only affects you, if you are running more than 1 virtual machines on your PC server, and one would run malicious code, specifically designed to attack the other virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
.. no, no, that isn't it.
This is an issue which can potentially allow an unprivileged user-mode process to read kernel memory.
-
#23 Reply
Posted by
pigrew
on 03 Jan, 2018 20:45
-
Do VM hypervisors normally allow multiple VMs to execute simultaneously (by dividing up cores)?
-
#24 Reply
Posted by
Monkeh
on 03 Jan, 2018 20:49
-
Do VM hypervisors normally allow multiple VMs to execute simultaneously (by dividing up cores)?
Sure. Or you'd have a VM with one core assigned blocking the whole shebang.
-
-
Are you people crazy? It affects Virtual machines that can read from each other. It only affects you, if you are running more than 1 virtual machines on your PC server, and one would run malicious code, specifically designed to attack the other virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
Nope. The ASLR leak has been demonstrated from JavaScript, so any code running on a web page you have visited can exploit MMU timing to resolve the addresses of kernel-mode data structures. After that it just needs a buffer overflow exploit or a rewritten stack return address and you are pwned. But ignorance is bliss.
https://www.vusec.net/projects/anc/
-
-
Is this even an issue for standalone PCs ?
-
#27 Reply
Posted by
dr.diesel
on 03 Jan, 2018 20:56
-
-
#28 Reply
Posted by
Monkeh
on 03 Jan, 2018 20:57
-
Is this even an issue for standalone PCs ?
Yes - your applications aren't meant to be able to find the kernel, let alone read it.
-
#29 Reply
Posted by
Ampera
on 03 Jan, 2018 21:00
-
-
#30 Reply
Posted by
PA0PBZ
on 03 Jan, 2018 21:00
-
Intel's PR response:
https://newsroom.intel.com/news/intel-responds-to-security-research-findings/
Look what they did there:
Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively.
-
#31 Reply
Posted by
Mr. Scram
on 03 Jan, 2018 21:04
-
This is rather interesting. I read that this only affects Intel chips, yet Intel is stating it affects AMD and Acorn chips as well.
They don't seem to be actually saying this. Just conveniently mentioning it together.
Obviously, Intel is in full damage control mode right now. This might be the moment they lose the crown to AMD, especially considering they've taken a few hits in the recent past. There is no way they wouldn't downplay the issue on their side and attempt to shift the focus elsewhere.
Regardless of which party you like more, Intel has shown itself to be very shrewd and ruthless when it comes to marketing, again and again.
-
#32 Reply
Posted by
langwadt
on 03 Jan, 2018 21:05
-
-
#33 Reply
Posted by
Ampera
on 03 Jan, 2018 21:09
-
Darn me and my inability to read.
Yeah, this does seem like Intel is starting to freak out. Good.
AMD should take the x86 helm and keep it. Intel has been being a dick about things for way too long. Before now there just wasn't another alternative.
-
#34 Reply
Posted by
Cerebus
on 03 Jan, 2018 21:15
-
Intel's PR response:
https://newsroom.intel.com/news/intel-responds-to-security-research-findings/
This is rather interesting. I read that this only affects Intel chips, yet Intel is stating it affects AMD and Acorn chips as well.
To quote
Mandy Rice-Davies "Well, 'e would [say that], wouldn't he?".
There is no currently extant evidence that this problem affects anyone else, just Intel.
Although the "official" explanation isn't out yet, what the problem appears to be is this: on Intel chips that support speculative execution, tests for whether a privilege violation has taken place are delayed until retirement of the speculated instructions. Thus, say, a speculative read of kernel space by a user process can actually retrieve results from kernel space before being 'caught' by a privilege violation exception, rather than being prevented from making the access in the first place. Quite how one exploits that to grab the accessed information before the exception takes place is the tricky bit, but catching the violation after it has actually taken place, as opposed to preventing the violation from taking place, is clearly flawed by design.
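The "tricky bit" - getting the data out before the fault lands - is generally assumed to go through the cache. In C-style pseudocode (schematic only; `kernel_address` and `probe_array` are illustrative names, and a real exploit additionally has to suppress or survive the fault):

```c
/* Schematic of the speculative read. Not a working exploit. */
unsigned char value = *(unsigned char *)kernel_address; /* faults, but only at retirement       */
(void)probe_array[value * 4096];                        /* speculatively touches one cache line */
/* After the fault: time accesses to probe_array[i * 4096] for i = 0..255;
 * the one fast (cached) line reveals the secret byte.                     */
```

The privilege check fires too late to stop the second load from leaving its cache footprint.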
-
#35 Reply
Posted by
Ampera
on 03 Jan, 2018 21:16
-
I thought speculative execution was a P5 feature, unless I am thinking of something else.
If that is the case I don't own an Intel chip without it.
-
#36 Reply
Posted by
MT
on 03 Jan, 2018 21:17
-
https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/
Think of the kernel as God sitting on a cloud, looking down on Earth. It's there, and no normal being can see it, yet they can pray to it.
And those of us who have a more-than-a-decade-old CPU and are not religious are safe? This time we can actually see and poke god in his/her eye!
-
#37 Reply
Posted by
JoeO
on 03 Jan, 2018 21:23
-
-
#38 Reply
Posted by
bd139
on 03 Jan, 2018 21:27
-
Some comments and anger.
----
Intel quote: "Intel believes its products are the most secure in the world and that, with the support of its partners, the current solutions to this issue provide the best possible security for its customers."
Fucking bollocks.
https://danluu.com/cpu-bugs/ - Intel have a very bad security record when it comes to microcode, Intel ME, horrible Atom bugs, FDIV etc. That's just basically someone saying:
"hey chaps, Chernobyl wasn't all that bad! We've still got the best nuclear power plant in the world. Can you poke the cameras the other way please, away from the blue glow in the windows from the Cherenkov radiation."
Also going back to 2007 (!), Theo de Raadt on x86:
https://marc.info/?l=openbsd-misc&m=119318909016582
IA32 and x86-64 are piles of rancid shit. There are people who have left Intel now shitposting about how bad their design verification was, and about the management team pushing "velocity" because they got urinated upon in the mobile sector.
----
Intel quote 2: "Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively."
Fuck them in the ass. What a bunch of spin-doctoring cunts. They have literally zero honour dragging AMD and ARM into this. I would be fucking pissed. This could hurt stock and reputation over a potential non-issue. That's just evil.
----
Are you people crazy? It affects Virtual machines that can read from each other. It only affects you, if you are running more than 1 virtual machines on your PC server, and one would run malicious code, specifically designed to attack the other virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
As alluded to in a previous post, this is a privilege escalation bug which allows kernel memory to be read by user processes. There is already a demonstrated proof of concept which is enough to pull vectors out of kernel RAM. This entirely defeats ASLR and guts the privilege separation implementation of x86-64. Virtualization via VT-x is another layer of abstraction above this, and we don't know exactly how that is affected yet. The big worry for me is VMCS shadowing. I am almost 100% sure that is a turd which is going to fall out of this, maybe not for a few months yet. If you look at how the EPT / TLB implementation works once you add IOMMU and virtualization support, it's difficult to work out exactly how the hell it all fits together. It's so complicated it's like the film "Cube". I don't think any engineering team, be it forward engineering or design validation, can actually rationally test the whole thing.
Ugh this is a nightmare just unfolding.
-
#39 Reply
Posted by
Decoman
on 03 Jan, 2018 21:28
-
I think it is fair to assume that every Intel cpu has "NSA inside" with Intel's management engine. :|
Personally, I think owning a computer these days is just a horror show. No privacy, bad security, bad software and what I like to think of as being the police state (what people call 'surveillance state').
Afaik, any catastrophic security flaw involving the management engine has been expected for quite some time now.
-
#40 Reply
Posted by
tszaboo
on 03 Jan, 2018 21:50
-
Are you people crazy? It affects Virtual machines that can read from each other. It only affects you, if you are running more than 1 virtual machines on your PC server, and one would run malicious code, specifically designed to attack the other virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
.. no, no, that isn't it.
This is an issue which can potentially allow an unprivileged user-mode process to read kernel memory.
Then either they have more than one issue at the same time, or IDK what is going on.
https://www.techpowerup.com/240174/intel-secretly-firefighting-a-major-cpu-bug-affecting-datacenters
The vulnerability lets users of a virtual machine (VM) access data of another VM on the same physical machine (a memory leak).
Anyway, others write that all x86 is affected, even ARM (sounds like bullshit, but possible). We'll see in about a week; until then it is all speculation.
-
#41 Reply
Posted by
bd139
on 03 Jan, 2018 21:56
-
Then either they have more than 1 issue at the same time, or IDK what is going on.
Some people are responding to Xen hypervisor embargoed XSA-253:
https://xenbits.xen.org/xsa/ ...
I am currently hating that dealing with all this shit is my hat. I should have migrated to China in the late 1990s and done EE there
Also Amazon have started randomly rebooting AWS instances now, probably applying patches. Fun fun fun for me over the next few days.
-
-
This looks like extra juicy news. I'm looking forward to seeing how this unfolds.
-
#43 Reply
Posted by
Macbeth
on 03 Jan, 2018 21:59
-
Maybe the Intel guys fixing the bug are being overly cautious, or they just don't want AMD to have an advantage, but ...
https://lkml.org/lkml/2017/12/27/2
PMSL
if (c->x86_vendor != X86_VENDOR_AMD)
    setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
Enough said. But shit all my CPU's are Intel right now...
-
#44 Reply
Posted by
Ampera
on 03 Jan, 2018 22:01
-
In essence, it's half the people in the industry collectively shitting their pants and the other half waiting to see how bad it stinks.
maybe the Intel guys fixing the bug is overly cautious, or they just don't want AMD to have an advantage, but ...
https://lkml.org/lkml/2017/12/27/2
PMSL
if (c->x86_vendor != X86_VENDOR_AMD)
setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
Enough said. But shit all my CPU's are Intel right now...
Lol.
I own one Intel CPU that's affected, my main i7-4790k. All the others are either AMD or too old to be affected (Pentium III, Pentium 4, Pentium Pro - basically, I'm all about the Pentiums).
CPUs get more annoying by the day. It's a battle with no victor between people who want to break into computers for profit and people who want to keep people secure. What it comes down to is that most people don't care about security; they care about a fast, simple device, and little else.
-
#45 Reply
Posted by
bd139
on 03 Jan, 2018 22:02
-
That AMD patch enabled this to happen:
-
#46 Reply
Posted by
amyk
on 03 Jan, 2018 22:27
-
The link posted before,
https://twitter.com/brainsmoke/status/948561799875502080, is currently the only public demonstration I know of, but you can see how it works in general: if an address has been recently accessed, it will be in the cache, so it will be faster to access than one which hasn't been. My guess is that the CPU will do a speculative access and cache the data even if the access turns out to be invalid, altering the timing thereafter.
Intel's response that it's "operating as designed" is because no one ever thought this would be a real problem, and so far it remains to be seen how much of one it really is.
Is this even an issue for standalone PCs ?
Yes - your applications aren't meant to be able to find the kernel, let alone read it.
It depends on what applications you run, and whether you trust them. Obviously if you trust everything running on the CPU, e.g. like in an embedded system, this has little relevance. If you're a cloud provider or user with hardware being shared by dozens if not more users who don't trust each other at all, then it's a big problem.
This also theoretically includes things like Javascript running in browsers, so you need to be careful of
any untrusted code running on your system, but if you don't have any, the situation hasn't changed.
It will be interesting to see what happens...
-
#47 Reply
Posted by
station240
on 03 Jan, 2018 22:31
-
There's speculation that Intel will have to repeat the FDIV-bug offer of replacement CPUs.
I can't imagine large data-center companies like Amazon not demanding replacement silicon, given how huge the CPU hit is from the workaround for this bug.
Given how much shit Apple got for slowing down CPUs in iPhones with weak batteries, I can't imagine consumers being too pleased with Intel either.
-
#48 Reply
Posted by
bd139
on 03 Jan, 2018 22:33
-
If they offered replacements it would destroy them entirely. Watch the corporate wriggling over the next few months.
-
#49 Reply
Posted by
Mr. Scram
on 03 Jan, 2018 22:39
-
Speculation that Intel will have to repeat the FDIV bug offer of replacement CPUs.
I can't imagine large data center companies like Amazon not demanding replacement silicon, given how huge the CPU hit is for the workaround for this bug.
Given how much shit Apple got in for slowing down CPUs in iPhones with weak batteries, I cannot imagine consumers being too pleased with Intel either.
Intel never guaranteed performance and the chips still work, so I guess they're off the hook there. A problem with Apple was that they hid it, which made it look like they were slowing down old hardware to sell new hardware. Intel is hiding neither the problem nor the performance hit.
-
#50 Reply
Posted by
glarsson
on 03 Jan, 2018 22:46
-
-
#51 Reply
Posted by
wraper
on 03 Jan, 2018 23:08
-
Intel's PR response:
https://newsroom.intel.com/news/intel-responds-to-security-research-findings/
This is rather interesting. I read that this only affects Intel chips, yet Intel is stating it affects AMD and Acorn chips as well.
Intel are some huge dicks. They wrote the statement in a sleazy way, as if to suggest AMD is affected as well, but without actually saying so:
Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect. Based on the analysis to date, many types of computing devices — with many different vendors’ processors and operating systems — are susceptible to these exploits.
Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively.
That caused AMD stock to drop a few percent, then AMD replied with:
To be clear, the security research team identified three variants targeting speculative execution. The threat and the response to the three variants differ by microprocessor company, and AMD is not susceptible to all three variants. Due to differences in AMD’s architecture, we believe there is a near zero risk to AMD processors at this time. We expect the security research to be published later today and will provide further updates at that time.
-
#52 Reply
Posted by
andersm
on 03 Jan, 2018 23:32
-
The details have now been released at
https://spectreattack.com/. The Meltdown attack, which is more serious at least in the short term, affects only Intel CPUs, while the Spectre attacks probably affect every processor featuring speculative execution.
-
#53 Reply
Posted by
bd139
on 03 Jan, 2018 23:33
-
-
#54 Reply
Posted by
wraper
on 03 Jan, 2018 23:44
-
Variant 1: Bounds check bypass
This section explains the common theory behind all three variants and the theory behind our PoC for variant 1 that, when running in userspace under a Debian distro kernel, can perform arbitrary reads in a 4GiB region of kernel memory in at least the following configurations:
Intel Haswell Xeon CPU, eBPF JIT is off (default state)
Intel Haswell Xeon CPU, eBPF JIT is on (non-default state)
AMD PRO CPU, eBPF JIT is on (non-default state)
Apparently the only AMD models shown to be affected are ones running Linux with a non-default config.
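For reference, the bounds-check-bypass pattern at the heart of variant 1 looks roughly like this (C-style sketch; `array1`/`array2` are the conventional names from the write-ups, and the exact stride varies between demos):

```c
/* Variant 1 sketch. The branch is architecturally safe, but with a
 * mistrained predictor and an attacker-controlled out-of-bounds x, the
 * body runs speculatively: array1[x] (the secret) selects which line of
 * array2 gets cached, and that choice is recoverable later by timing. */
if (x < array1_size)                  /* predictor mistrained to "taken"  */
    y = array2[array1[x] * 4096];     /* secret-dependent cache footprint */
```

The eBPF JIT angle in the quote above matters because it lets an attacker get a gadget like this placed inside kernel context.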
-
#55 Reply
Posted by
bd139
on 03 Jan, 2018 23:51
-
I'm not sure they actually cover every CPU stepping and architecture with the test cases. What would be nice is a red/green test book of what has and hasn't been tested.
Well looks like I'm in for a late night
-
#56 Reply
Posted by
Koldman
on 04 Jan, 2018 00:48
-
I don't quite understand the whole thing, but I feel like the kid that saved all his paper route money and bought a new bike only for it to fall apart.
-
#57 Reply
Posted by
MT
on 04 Jan, 2018 01:07
-
Soo are Intel huge dicks or not?
-
#58 Reply
Posted by
wraper
on 04 Jan, 2018 01:13
-
Soo are Intel huge dicks or not?
Average
-
#59 Reply
Posted by
MT
on 04 Jan, 2018 01:16
-
So Intel are average dicks!? Soooo what are AMD and ARM?
-
#60 Reply
Posted by
wraper
on 04 Jan, 2018 01:19
-
So Intel are average dicks!? Soooo what are AMD and ARM?
Not enough data yet.
-
#61 Reply
Posted by
MT
on 04 Jan, 2018 01:25
-
Ahhh! So in the meantime we just go ape, shouting "the end of the world is near"!
-
#62 Reply
Posted by
wraper
on 04 Jan, 2018 01:34
-
https://www.amd.com/en/corporate/speculative-execution
Variant One (Bounds Check Bypass): Resolved by software / OS updates to be made available by system vendors and manufacturers. Negligible performance impact expected.
Variant Two (Branch Target Injection): Differences in AMD architecture mean there is a near zero risk of exploitation of this variant. Vulnerability to Variant 2 has not been demonstrated on AMD processors to date.
Variant Three (Rogue Data Cache Load): Zero AMD vulnerability due to AMD architecture differences.
-
#63 Reply
Posted by
wraper
on 04 Jan, 2018 01:47
-
As I understand it from the data currently available, AMD is only affected on Linux with a non-default configuration.
AMD PRO CPU, eBPF JIT is on (non-default state)
-
#64 Reply
Posted by
andersm
on 04 Jan, 2018 01:48
-
-
#65 Reply
Posted by
David Hess
on 04 Jan, 2018 05:00
-
Looks like a fix might take a branch prediction rework. If so, not trivial, and if a fix wasn't already in the works, something like this could take quite some time to fix.
The problem is speculative execution accessing protected memory. The fix would be to fault the speculated instructions before they access memory, instead of at retirement. That is what AMD does: by tagging the TLBs, speculated accesses to protected memory never occur.
-
#66 Reply
Posted by
David Hess
on 04 Jan, 2018 05:03
-
And this is what happens people when you layer abstractions so deep and so complicated that you require several volumes of books to just explain the ISA and to maintain backwards compatibility to what is fundamentally some crack smoke inspired architecture from the late 1970s.
The problem occurs because of how speculative execution works so it applies to RISC designs as well. ARM is apparently vulnerable to it but AMD is not because they tag and invalidate their TLBs which prevents this very problem.
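As I understand the claimed AMD difference, the ordering looks something like this (hedged: a conceptual sketch only, not documented microarchitecture, and `tlb_lookup`/`squash_speculation` are made-up names):

```c
/* Conceptual ordering difference between the two designs (sketch only). */
tlb_entry_t e = tlb_lookup(vaddr);
if (!e.user_accessible && in_user_mode) {
    /* permission checked up front: the speculative load never issues,
     * so no secret-dependent cache footprint is ever created          */
    squash_speculation();
} else {
    data = load(e.paddr);  /* the delayed-check design issues this load
                            * regardless and only faults at retirement  */
}
```

Either way the program eventually faults; the difference is whether the protected data ever touches the cache first.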
-
#67 Reply
Posted by
andersm
on 04 Jan, 2018 06:25
-
But AMD CPUs have this Cortex-A5 management unit running inside, and since it's part of the security subsystem, I assume it has a higher security clearance. What if a hacker can inject some bad code into the ARM firmware, then use it to attack the ARM, and then use the ARM to attack the Zen cores?
The Cortex-A5 is an in-order core, so it is not vulnerable to anything involving speculative execution. Also, these attacks only allow for extracting data, they can't (directly) be used to modify anything.
-
#68 Reply
Posted by
Mr.B
on 04 Jan, 2018 07:30
-
Thank you to all the very knowledgeable low level CPU experts here.
This post is just to acknowledge the community experts and bookmark this thread so that I can follow it easily.
The gravity of this situation intrigues me….
The combination of the immense possible damage to Intel and the resulting fallout in the general computing arena, be it attacks or a processing impact from OS-level patches, should not be underestimated IMHO.
-
#69 Reply
Posted by
Jeroen3
on 04 Jan, 2018 07:30
-
If they offered replacements it would destroy them entirely. Watch the corporate wriggling over the next few months.
Since when has the bug been present? I read apocalyptic headlines saying two decades, but that seems a bit long.
-
#70 Reply
Posted by
Mr. Scram
on 04 Jan, 2018 07:38
-
Since when is bug present? I read apocalyptic headlines saying two decades, but that seems a bit long.
I think I read Sandy Bridge and up, which seems to make some sense from an architectural point of view.
-
#71 Reply
Posted by
bd139
on 04 Jan, 2018 07:53
-
I think this could go a long way back as suggested. Speculative out of order execution goes back to Pentium Pro if I remember correctly. It would be nice to confirm it either way but the effort required is likely extensive.
You have to ask: how long have the security services known about this?
As an example of where this is heading it looks like we’ve already had patches for AWS deployed quietly. No word from some vendors yet on patch status. I suspect some are as surprised as we are.
-
#72 Reply
Posted by
Mr. Scram
on 04 Jan, 2018 08:10
-
I think this could go a long way back as suggested. Speculative out of order execution goes back to Pentium Pro if I remember correctly. It would be nice to confirm it either way but the effort required is likely extensive.
You have to ask: how long have the security services known about this?
As an example of where this is heading it looks like we’ve already had patches for AWS deployed quietly. No word from some vendors yet on patch status. I suspect some are as surprised as we are.
There's a huge load of very critical leaks surfacing lately. If you stack those together, you basically have free rein over almost every computer: Intel ME, the various macOS vulnerabilities where you can get root access without much trouble, and a few more.
-
#73 Reply
Posted by
bd139
on 04 Jan, 2018 08:19
-
Yes indeed. It doesn’t look good for the IT business at all. I have, as someone deeply involved in the security side of things, considered cashing everything I have in and bailing. It’s too bloody stressful keeping the snowflakes covered in piss alive (google “programming sucks” for context of that comment).
There’s a bigger one on the cards as well. While this is confined to a single machine, we’re actually running short on viable crypto tech at the moment. In the cat-and-mouse game played against ciphers, key exchange and transport layer protocols, the cat is currently doing some serious catching up...
-
#74 Reply
Posted by
JoeN
on 04 Jan, 2018 09:41
-
Is this even an issue for standalone PCs ?
The Spectre attack can be delivered as JavaScript, which means some site you go to could deliver it, search your memory for something interesting, and phone home. The attack is actually pretty slow, so maybe it's not likely to find anything, but it can randomly poke around. Fixing JavaScript engines to disallow it should be easy, though.
https://spectreattack.com/spectre.pdf
https://meltdownattack.com/meltdown.pdf
"The unoptimized code in Appendix A reads approximately 10KB/second on an i7 Surface Pro 3."
The attack is right there in the document, in C; they don't give a JavaScript example, I think for a good reason.
The paper also shows Meltdown reading memory from another process.
-
#75 Reply
Posted by
dmills
on 04 Jan, 2018 10:33
-
The cat and mouse game that is played against ciphers, key exchange and transport layer protocols is currently letting the cat doing some serious catching up...
I thought the underlying math was still safe-ish, for all the work being done on number-theoretic sieves and the discrete log problem?
Now, attacks on protocols and implementations - that has always been the low-hanging fruit when breaking these things, between side channels and just plain broken implementations... I just LOVE people who write their own crypto.
Regards, Dan.
-
-
Is this even an issue for standalone PCs ?
The Spectre attack can be delivered as Javascript which means some site you go to could deliver it and search your memory for something interesting and phone home. The attack is actually pretty slow though, I guess maybe it's not likely to find anything, but it can randomly poke around. Fixing Javascript to disallow it should be easy, though.
"Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code." (from the first .pdf)
They say "portable js code", sort of implying it can break any JavaScript engine sandbox, which is hardly believable because no two OS/browser/browser version/cpu/cpu version combos are the same, have the same js engine, nor produce the same code after jitting, etc. The code they show is hand-tweaked JavaScript: "Like other optimized JavaScript engines, V8 performs just-in-time compilation to convert JavaScript into machine language. To obtain the x86 disassembly of the JIT output during development, the command-line tool D8 was used. Manual tweaking of the source code leading up to the snippet above was done to get the value of simpleByteArray.length in local memory (instead of cached in a register or requiring multiple instructions to fetch)." Hardly "portable" as they say.
"We wrote a JavaScript program that successfully reads data from the address space of the browser process running it." means they could only read the browser's memory space, which is not good but not the same nor as dangerous as "search your memory for something interesting and phone home".
OTOH, I strongly believe, I have no doubt, that ALL the browsers have, on purpose, some sort of very well hidden backdoor to pwn our computers. The keys are either in Apple/Google/Mozilla/Brave/Opera hands or in the NSA's. I don't think Heartbleed was an accident, either.
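To make the gadget structure concrete: here's a toy Python model of the variant-1 bounds-check bypass the paper describes. The "speculation" is faked (I just execute the body before honoring the check, where real code mistrains the hardware branch predictor), and all the names are mine, but the shape is the same: the architectural result gets squashed while the cache footprint survives.

```python
# Toy model of Spectre variant 1 (bounds-check bypass). Illustration only:
# real attacks mistrain the branch predictor; here "speculation" is
# modelled explicitly as running the body before the check settles.

array1 = bytes([1, 2, 3, 4])   # in-bounds data, length 4
secret = b"S"                  # lives just "past the end" of array1
memory = array1 + secret       # flat address space for the model

cache = set()                  # which probe values got touched

def victim(x: int) -> None:
    # Architecturally the check protects array1, but speculatively the
    # out-of-bounds load and the dependent probe access still happen:
    speculative_value = memory[x]    # may read past array1
    cache.add(speculative_value)     # leaves a cache footprint
    if x >= len(array1):             # check resolves "too late"
        return                       # result squashed; cache state isn't

victim(len(array1))                  # attacker-chosen out-of-bounds index
assert ord(secret) in cache          # secret recoverable via timing
```

Whether the JIT output of a given engine on a given CPU produces exactly this pattern is the "portability" question being argued about above.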
-
#77 Reply
Posted by
bd139
on 04 Jan, 2018 11:03
-
The cat and mouse game that is played against ciphers, key exchange and transport layer protocols is currently letting the cat doing some serious catching up...
I thought the underlying math was still safeish for all the work being done on number theoretic sieves and the discrete log problem?
Now attacks on protocols and implementations, that has always been the low hanging fruit when breaking these things, between side channel and just plain broken implementations.... I just LOVE people who write their own crypto.
Regards, Dan.
At the moment, yes, we're safeish, but as always the transition time between safeish and unsafe gets exponentially shorter. There's a lot of progress in quantum computing which I'm keeping one eye on. There's also progress we probably can't see, and it's likely well funded. They're only factoring relatively small numbers now (tangibly brute-forceable on traditional compute with an eye shut) but the gains are exponential. That could make the discrete log problem trivial, or at least affordable. On a decade scale, shit might be hitting the proverbial fan.
Implementations are easy pickings, especially as everything is still written in bloody C. Also look at Logjam, where the implementation was good but a bad assumption was made on the mathematical side of things (shipping the same DH primes everywhere).
Is this even an issue for standalone PCs ?
The Spectre attack can be delivered as Javascript which means some site you go to could deliver it and search your memory for something interesting and phone home. The attack is actually pretty slow though, I guess maybe it's not likely to find anything, but it can randomly poke around. Fixing Javascript to disallow it should be easy, though.
"Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code." (from the first .pdf)
They say "portable js code", sort of implying it can break any JavaScript engine sandbox, which is hardly believable because no two OS/browser/browser version/cpu/cpu version combos are the same, have the same js engine, nor produce the same code after jitting, etc. The code they show is hand-tweaked JavaScript: "Like other optimized JavaScript engines, V8 performs just-in-time compilation to convert JavaScript into machine language. To obtain the x86 disassembly of the JIT output during development, the command-line tool D8 was used. Manual tweaking of the source code leading up to the snippet above was done to get the value of simpleByteArray.length in local memory (instead of cached in a register or requiring multiple instructions to fetch)." Hardly "portable" as they say.
"We wrote a JavaScript program that successfully reads data from the address space of the browser process running it." means they could only read the browser's memory space, which is not good but not the same nor as dangerous as "search your memory for something interesting and phone home".
OTOH, I strongly believe, I have no doubt, that ALL the browsers have, on purpose, some sort of very well hidden backdoor to pwn our computers. The keys are either in Apple/Google/Mozilla/Brave/Opera hands or in the NSA's. I don't think Heartbleed was an accident, either.
You may be right. You don't have to look far to find state interference in crypto implementations. Browsers are likely easier targets.
https://en.wikipedia.org/wiki/IPsec#Alleged_NSA_interference
https://en.wikipedia.org/wiki/Bullrun_(decryption_program)
http://blog.erratasec.com/2013/09/tor-is-still-dhe-1024-nsa-crackable.html
... etc etc ...
-
#78 Reply
Posted by
dr.diesel
on 04 Jan, 2018 11:51
-
-
#79 Reply
Posted by
nfmax
on 04 Jan, 2018 11:59
-
I have now turned JavaScript OFF in all browsers, until further notice. YouTube no longer works. Bye bye, Dave!
-
#80 Reply
Posted by
Rerouter
on 04 Jan, 2018 12:13
-
Dr.Diesel, to better understand: no matter what, two of those vulnerabilities are present and unfixable in all affected Intel products, no matter how it's patched? Or are there ways to avoid it, e.g. the other poster disabling JavaScript?
-
#81 Reply
Posted by
Decoman
on 04 Jan, 2018 12:26
-
From the linked article below, some guy (lol, this was my way of trying to reference a quotation about a quotation) is referenced as having pointed out the following about Intel's Management Engine:
According to Zammit, the ME:
* has full access to memory (without the parent CPU having any knowledge);
* has full access to the TCP/IP stack;
* can send and receive network packets, even if the OS is protected by a firewall;
* is signed with an RSA 2048 key that cannot be brute-forced; and
* cannot be disabled on newer Intel Core2 CPUs.
https://www.techrepublic.com/article/is-the-intel-management-engine-a-backdoor/
This is the kind of shit that makes me sit here and think I am not really the owner or manager of my own damn computer.
-
#82 Reply
Posted by
bd139
on 04 Jan, 2018 12:30
-
I have now turned JavaScript OFF in all browsers, until further notice. YouTube no longer works. Bye bye, Dave!
I don't use the browser for youtube!
https://rg3.github.io/youtube-dl/
This downloads videos, which are then carted off to my iPhone via VLC, and I sit and watch them on the sofa with my headphones on.
I have teenagers and a shitty Internet connection so watching youtube without horrible buffering is off the cards.
This is the kind of shit that makes me sit here and think I am not really the owner or manager of my own damn computer.
You're right. Welcome to serfdom.
Really though, I've got a few Z84C0008 parts, a whole tube of MCM6810P SRAMs, some stripboard and about 50 tubes of TTL ICs here. Build my own shit computer instead!
-
#83 Reply
Posted by
dr.diesel
on 04 Jan, 2018 12:30
-
Dr.Diesel, to better understand: no matter what, two of those vulnerabilities are present and unfixable in all affected Intel products, no matter how it's patched? Or are there ways to avoid it, e.g. the other poster disabling JavaScript?
Patches are out for Meltdown, which come with a varying performance hit, but it looks like Spectre will take a hardware fix, though it can be made more difficult to exploit via patches.
Disabling JavaScript helps prevent a browser/webpage-based attack.
This is still developing, and will lead to interesting speculative execution changes for all players, including AMD, I'd bet.
-
#84 Reply
Posted by
tszaboo
on 04 Jan, 2018 12:53
-
Are you people crazy? It affects virtual machines that can read from each other. It only affects you if you are running more than one virtual machine on your PC or server, and one runs malicious code specifically designed to attack the other virtual machine. This is only an issue for cloud providers.
99.9999% of PC users are not affected.
Nope. The ASLR leak has been demonstrated from JavaScript, so any code running from a web page you have visited can exploit MMU timing to resolve the addresses of kernel-mode data structures; after that it just needs a buffer-overflow exploit or a rewrite of the stack return address, and you are pwned. But ignorance is bliss.
https://www.vusec.net/projects/anc/
That sounds pretty bad. Also, executing data? So any webpage can take over my PC. Great.
Let's just hope they fix it, the effect is not major with a normal workload, and they fix Windows 7 also. I don't feel like downgrading my PC to Windows 10.
-
#85 Reply
Posted by
bd139
on 04 Jan, 2018 13:04
-
-
#86 Reply
Posted by
Decoman
on 04 Jan, 2018 13:10
-
You're right. Welcome to serfdom.
Well, I have to say it is even worse than that. Given the reach of surveillance and hacking and other terrible things, when nation states target individuals, the threat is real. I personally don't think I can really travel to the USA, nor the UK, because I have opinions that basically deem these government institutions to be villains. But enough about that. I am confident that I am on some list somewhere, and yet I have done nothing wrong. I never forget that one time some random guy in an IRC chat asked me if I owned a firearm (iirc) and if I was a member of an organization. And the truth was ofc that I had none and wasn't in any organization. I like playing Arma 3 (most fun game as multiplayer, but terrible game mechanics, and you can drive ground vehicles and fly helicopters and build bases), and one time, without me even really bringing up any issue at all, this one guy who at one point claimed to be working in the arms industry suddenly had this urge to start a personal conversation with me about something vague, and talked about causing attention like ripples in the water, and other weird stuff, making me wonder if playing on that one server flagged my other co-players in some way. And later, when this guy in what I thought was a Californian accent (obviously a foreigner) sneaks up on me in the local park and says "Don't be scared!" as he passes by on his skateboard, I start to wonder whether I ought to get a little paranoid or not.
In the proverbial "perfect world" I am sure I wouldn't be bothered by relying on others for my security, but as it stands today there is literally nobody to trust, the way I see it. Not the local government, certainly not foreign governments, not my browser maker, not even the technologists that opine on the matter of the "internet of things", and not all the people that actually work on the design and implementation of anything to do with computers and/or networking and standards. I listened to the US Congress holding a hearing not too long ago about their supposed inability to read off this one particular mobile phone in a criminal investigation (iirc, after the show and spectacle of that hearing, it later turned out that a company managed to copy the contents for law enforcement), and seeing how a senior Apple representative basically happily bent over and acknowledged the suggestion of discussing the matter further with the committee after the hearing, to help out, for me made any public statements from Apple about how they care about people's privacy a moot point. Ofc, it should be pointed out that I don't own an Apple product. I don't even own a smart phone, as I have the impression that the new phones aren't very good security-wise, and they seem to incorporate various features that amount to streaming user telemetry, which imo is basically at odds with one's privacy needs.
I am also the kind of guy that repeatedly points out to others that people's notion of 'privacy' tends to be misunderstood. It ought to be obvious that the matter at hand is foremost one's privacy needs, not 'a right' as such, which in any case would certainly be limited by the mere act of defining privacy, or, given how the mere expectation of privacy is contested, by simply disallowing the expectation of privacy in some arbitrary way.
-
#87 Reply
Posted by
bd139
on 04 Jan, 2018 13:24
-
I can't argue with you. It's the same opinion here.
I work on the grey man principle. Cut your life in two. You have the public life and the private life. The public life is in line with expectations. Your private life is offline, entirely.
You will see me mentioning various things like DaveCAD (pen+paper) and using lots of old rancid analogue equipment. This is done not wholly because I enjoy it (it is fortunate that I do), but because I am close enough to how things really work that I am scared of it. There needs to be a backup plan away from "network dependency".
-
#88 Reply
Posted by
Mr. Scram
on 04 Jan, 2018 13:27
-
I can't argue with you. It's the same opinion here.
I work on the grey man principle. Cut your life in two. You have the public life and the private life. The public life is in line with expectations. Your private life is offline, entirely.
You will see me mentioning various things like DaveCAD (pen+paper) and using lots of old rancid analogue equipment. This is done not wholly because I enjoy it (it is fortunate that I do), but because I am close enough to how things really work that I am scared of it. There needs to be a backup plan away from "network dependency".
There is no backup plan. Even if you arrange something, others will forcefully take it from you once it becomes of value.
-
#89 Reply
Posted by
Decoman
on 04 Jan, 2018 13:33
-
I think corporations would be the first to be screwed on a general basis.
So I think it makes sense that if you run an important business with proprietary data to be kept secret, having no operational security would be bad: a more or less open computer network (or bad computer security practices in general, allowing phishing attacks and the like), letting people just walk around the premises, or even inside your home, or hiring people randomly with no background checks at all.
I now am reminded of how thieves will steal the entire safe, if the safe is not nailed down.
It has been said though that locks are only there to slow down trespassers, and not to really prevent entry/theft.
-
#90 Reply
Posted by
bd139
on 04 Jan, 2018 13:38
-
Yes that's the biggest concern for me as well.
I am developing an exit strategy at the moment. I don't want to be around the gigantic turd if it goes up in flames.
-
#91 Reply
Posted by
BravoV
on 04 Jan, 2018 13:44
-
-
#92 Reply
Posted by
Decoman
on 04 Jan, 2018 13:51
-
Its on CNN -> http://money.cnn.com/2018/01/03/technology/computer-chip-flaw-security/index.html
The article states that "Flaws in chips are unusual." I am no expert, but I suspect that statement is not objectively true. I've also read that there is a real risk of (any) computer chip being doped in a subtle way by an advanced adversary, in order to manipulate the chip in use in desired ways.
-
#93 Reply
Posted by
Mr. Scram
on 04 Jan, 2018 13:56
-
I think corporations would be the first to be screwed on a general basis.
So I think it makes sense that if you run an important business with proprietary data to be kept secret, having no operational security would be bad: a more or less open computer network (or bad computer security practices in general, allowing phishing attacks and the like), letting people just walk around the premises, or even inside your home, or hiring people randomly with no background checks at all.
I now am reminded of how thieves will steal the entire safe, if the safe is not nailed down. It has been said though that locks are only there to slow down trespassers, and not to really prevent entry/theft.
We know this to be true when it comes to computers too. Any adversary motivated enough will find a way to gain access. With enough mud thrown, something is bound to stick. You can only make yourself a less interesting target, and more painful to hit.
-
#94 Reply
Posted by
Decoman
on 04 Jan, 2018 14:47
-
One thing I've learned about computers, is that it does not matter if the crypto is good, if the implementation is bad. And so, then things get really complicated, and a single wrong character in some piece of code somewhere, can lead to what is called a 'catastrophic failure' with regard to having some expected security.
An important aspect of computer security is probably how allowing an adversary physical access makes security more or less an impossibility, as the risk of someone tampering with the physical hardware at some location becomes more like a feature than a threat model.
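A classic instance of the "single wrong character" problem is comparing secrets with == instead of a constant-time compare. A small Python sketch (check_token_* are names I made up for illustration; hmac.compare_digest is the real stdlib fix):

```python
import hmac

# A single implementation detail can undo good crypto: comparing secrets
# with == short-circuits on the first differing byte, so response timing
# can leak how many leading bytes an attacker has guessed correctly.

def check_token_leaky(supplied: str, real: str) -> bool:
    return supplied == real              # timing depends on match length

def check_token_safe(supplied: str, real: str) -> bool:
    # hmac.compare_digest runs in time independent of where bytes differ
    return hmac.compare_digest(supplied.encode(), real.encode())

assert check_token_safe("s3cret", "s3cret")
assert not check_token_safe("guess!", "s3cret")
```

Both functions return the same answers; only their timing behaviour differs, which is exactly the kind of flaw a code review won't catch by looking at correctness alone.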
-
#95 Reply
Posted by
Avacee
on 04 Jan, 2018 16:12
-
-
-
Doctor Who:
The trouble with computers, of course, is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed, even if you order them to kill you. So if you do happen to change your mind, it's very difficult to stop them obeying the original order, but... not impossible.
-
#97 Reply
Posted by
SaabFAN
on 04 Jan, 2018 18:14
-
Doctor Who:
The trouble with computers, of course, is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed, even if you order them to kill you. So if you do happen to change your mind, it's very difficult to stop them obeying the original order, but... not impossible.
No problem with a TARDIS
Wasn't AMD working on something to replace the x86-Architecture for consumer-computers? I remember reading something like that one or two years back. Would be the perfect time to present the new CPU-Architecture now
-
#98 Reply
Posted by
Ampera
on 04 Jan, 2018 18:34
-
Doctor Who:
The trouble with computers, of course, is that they're very sophisticated idiots. They do exactly what you tell them at amazing speed, even if you order them to kill you. So if you do happen to change your mind, it's very difficult to stop them obeying the original order, but... not impossible.
No problem with a TARDIS
Wasn't AMD working on something to replace the x86-Architecture for consumer-computers? I remember reading something like that one or two years back. Would be the perfect time to present the new CPU-Architecture now
There is so much going on right now in the computing world. New architectures are ALWAYS a great idea. Replacing what everybody is using with a better technology is definitely attractive, but the issue is not only what, but how do we get people to drop their over 35 years of software support on a single platform for something else? Who is going to be able to make enough of a statement for everybody to fight against everybody who WILL want to keep the x86 battleship tanking?
At the moment, there is no consumer-oriented processing platform with the same power and app support as x86. ARM has a lot of app support, and POWER has very similar, no pun intended, power, but they just don't mix. I recall watching a Computer Chronicles episode where they were talking about DEC Alpha, MIPS, and PowerPC machines taking the stage, and asking if the market was going to expand towards them. (It was the episode about the original Pentium if you want to see it.) About 25 years later, DEC Alpha is completely dead, MIPS is hard to come by, and PowerPC is dead on the desktop, with POWER relegated to servers and supercomputing tasks.
There have been designs that fix so many problems with x86. Heck, just starting over with x86 and re-implementing a lot of stuff would make the platform WAY better, but the reason why everybody uses x86, and the reason why I can still run the first version of PC-DOS on a Threadripper is because of backwards compatibility with application code. As more and more code is written for x86, we sink deeper into why nobody will change.
-
#99 Reply
Posted by
JoeN
on 04 Jan, 2018 19:24
-
You can use NoScript and leave Javascript turned on for certain sites. I don't think Youtube is going to send you anything malicious.
-
#100 Reply
Posted by
tszaboo
on 04 Jan, 2018 20:59
-
https://www.techpowerup.com/240273/intel-aware-of-cpu-flaws-before-ceo-brian-krzanich-planned-usd-24m-stock-sale
Intel CEO Brian Krzanich sold the maximum amount of shares in the company that he could, keeping only the mandatory 250,000 minimum shares that come with his position at Intel. In total, Krzanich sold 245,743 shares of stock he owned outright, and 644,135 shares he got from exercising his options. So, the man sold around 80% of his Intel shares while the company (and he himself, surely) knew the flaw would become public knowledge soon enough.
Sounds like insider trading to me.
-
#101 Reply
Posted by
David Hess
on 05 Jan, 2018 00:17
-
But AMD CPUs have this Cortex-A5 management unit running inside, and since it's part of the security subsystem, I assume it has a higher security clearance. What if a hacker can inject some bad code into the ARM firmware, then use that to attack the ARM core, then use the ARM core to attack the Zen cores?
This would be a big deal but has nothing to do with the exploits being discussed.
-
#102 Reply
Posted by
dr.diesel
on 05 Jan, 2018 01:09
-
-
#103 Reply
Posted by
Jeroen3
on 05 Jan, 2018 06:47
-
It works... Intel i7-4710MQ
-
#104 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 07:39
-
One thing I've learned about computers, is that it does not matter if the crypto is good, if the implementation is bad. And so, then things get really complicated, and a single wrong character in some piece of code somewhere, can lead to what is called a 'catastrophic failure' with regard to having some expected security.
An important aspect of computer security is probably how allowing physical access to an adversary makes having security more like an impossibility, as the risk of anyone tampering with physical hardware at some location is more like a feature, than a threat model.
Again, like in normal life: if they want you, they have you. Obviously, there are many parties out there that collect vast amounts of zero-days to use against anyone they please. However, the reality is most of us aren't important enough for zero-days. Those are expensive and relatively rare, and reserved for state-level chess or as the basis for a large criminal attack. There's bound to be some application or even library on your computer you haven't updated, and that might be enough. If you somehow dodged that bullet in the most unlikely fashion, there's still social engineering. There are attacks that can catch even very careful people out, and if those don't, the customer service desks of all the services you use aren't so well behaved. You can do everything right and still suffer from someone else's mistakes. There are a couple of well-known cases where this happened.
The uncomfortable truth is that when your time has come, you're done. Of course, this applies to regular life too and people prefer to deny that too.
-
#105 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 08:08
-
You can use NoScript and leave Javascript turned on for certain sites. I don't think Youtube is going to send you anything malicious.
They won't risk the fallout from doing so in this case, but I don't put it beyond them to not only index your behaviour when you use their software, but your behaviour elsewhere as well. Like a Facebook button, except it's not just your browsing behaviour, but everything you do on your computer. I realize this could be considered tin foil hatty, but it's been shown again and again that companies will overstep boundaries until the law tells them they can't, and even try to get away with as much as they can.
-
#106 Reply
Posted by
bd139
on 05 Jan, 2018 08:09
-
Indeed. It’s the “better to ask for forgiveness than permission” argument. Doesn’t wash when EU GDPR kicks in. Seriously large damaging fines for pulling that shit.
-
#107 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 08:37
-
Indeed. It’s the “better to ask for forgiveness than permission” argument. Doesn’t wash when EU GDPR kicks in. Seriously large damaging fines for pulling that shit.
The GDPR seems a bit overreaching in some areas, but considering the things that have been going on that might just be what's needed. I just hope it isn't used to slap regular IT companies around, while the big parties dance between the raindrops with impunity.
-
#108 Reply
Posted by
bd139
on 05 Jan, 2018 08:38
-
It's about shafting the big guys and the finance sector. We're having to move a lot of mountains to make it work.
Looking at 17% loss on everything now with the patches:
https://lkml.org/lkml/2018/1/3/281
-
-
I hope the hell POWER wins some fans out of this.
Not sure if anyone caught this, still reading through the other 4 pages...
https://access.redhat.com/security/vulnerabilities/speculativeexecution
PPC is out too. I wouldn't be surprised if newer SPARC was also out, since they also do branch prediction and/or out-of-order execution, but c'mon... who owns modern Oracle hardware?
-
#110 Reply
Posted by
bd139
on 05 Jan, 2018 11:38
-
Ah bugger. I had some hopes for POWER.
This dude is still OK!
-
-
Looking at 17% loss on everything now with the patches: https://lkml.org/lkml/2018/1/3/281
Yep. "The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don't use the kernel much might see a hit of perhaps 2-3 percent—there's still some overhead because the kernel always has to run occasionally, to handle things like multitasking.
But workloads that call into the kernel a ton will see much greater performance drop off. In a benchmark, a program that does virtually nothing other than call into the kernel saw its performance drop by about 50 percent; in other words, each call into the kernel took twice as long with the patch than it did without. Benchmarks that use Linux's loopback networking also see a big hit, such as 17 percent in this Postgres benchmark. Real database workloads using real networking should see lower impact, because with real networks, the overhead of calling into the kernel tends to be dominated by the overhead of using the actual network"
I wonder if the i5/i7 in a MacBookPro6,1 (running Snow Leopard) is affected by this? Or does this only happen on newer CPUs?
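That "calls into the kernel a ton" point is easy to eyeball on your own box. A rough sketch of the idea (my choice of os.stat as the cheap syscall; absolute numbers will vary wildly per machine, kernel and patch level):

```python
import os
import time

# Rough gauge of how syscall-heavy a workload is, and hence how exposed
# it is to KPTI overhead: compare pure-compute time against time spent
# making cheap system calls. Each kernel entry/exit pays the page-table
# switch on a patched kernel; the compute loop pays nothing extra.

N = 100_000

t0 = time.perf_counter()
acc = 0
for i in range(N):
    acc += i * i                 # stays in user space: no KPTI cost
compute_s = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    os.stat(".")                 # kernel entry/exit on every iteration
syscall_s = time.perf_counter() - t0

print(f"compute: {compute_s:.3f}s  syscalls: {syscall_s:.3f}s")
```

The closer your real workload is to the second loop, the closer you sit to the 17-50% end of those benchmark numbers; the closer to the first, the nearer you are to the 2-3% figure.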
-
#112 Reply
Posted by
bd139
on 05 Jan, 2018 12:28
-
Snow Leopard isn't patched. Only Sierra and High Sierra are.
I just moved two postgres and two nginx nodes over to new kernels. Here we go
-
#113 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 12:37
-
Yep. "The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don't use the kernel much might see a hit of perhaps 2-3 percent—there's still some overhead because the kernel always has to run occasionally, to handle things like multitasking.
But workloads that call into the kernel a ton will see much greater performance drop off. In a benchmark, a program that does virtually nothing other than call into the kernel saw its performance drop by about 50 percent; in other words, each call into the kernel took twice as long with the patch than it did without. Benchmarks that use Linux's loopback networking also see a big hit, such as 17 percent in this Postgres benchmark. Real database workloads using real networking should see lower impact, because with real networks, the overhead of calling into the kernel tends to be dominated by the overhead of using the actual network"
I wonder if the i5/i7 in a MacBookPro6,1 (running Snow Leopard) is affected by this? Or this only happens on newer cpus?
Any except the most ancient Intel CPU is affected by this. Whatever the case, unless you have a reason to think you're not affected it's likely you are.
-
-
I wonder if the i5/i7 in a MacBookPro6,1 (running Snow Leopard) is affected by this? Or this only happens on newer cpus?
Any except the most ancient Intel CPU is affected by this. Whatever the case, unless you have a reason to think you're not affected it's likely you are.
Snow leopard isn't patched. Only Sierra and High Sierra are.
Ufff. In practice, then, ~all the PCs in the world are vulnerable? And I'm going to have to abandon my beloved Snow Leopard? Shit.
-
#115 Reply
Posted by
nfmax
on 05 Jan, 2018 14:08
-
Snow leopard isn't patched. Only Sierra and High Sierra are.
Is Sierra patched already? I haven't seen that stated anywhere else yet. I just moved my 'testbed' MBP to 10.13.2 so I can use email/web, and shut down my main iMac until the dust settles a bit. But the older Macs that won't run Sierra? Are they just scrap now? I have two PPC Macs which run my music library and drive the scanner that HP couldn't be bothered to support on Lion (!). I'll probably have to set up a separate airgapped cabled network for them now.
You would have to be mad to buy a computer or phone of any type now or for the next two or three years, without extreme need - although I gather Raspberry Pis of all versions are not affected.
-
#116 Reply
Posted by
Kalvin
on 05 Jan, 2018 14:08
-
Does this also affect virtualized OSs, like a Linux running in Virtualbox, ie. can the Linux running inside the Virtualbox running on Windows 10 host compromise the Windows 10 host?
-
#117 Reply
Posted by
nfmax
on 05 Jan, 2018 14:12
-
Does this also affect virtualized OSs, like a Linux running in Virtualbox, ie. can the Linux running inside the Virtualbox running on Windows 10 host compromise the Windows 10 host?
Yes
Basically the hardware protection between privilege levels has been demonstrated not to work.
-
#118 Reply
Posted by
wraper
on 05 Jan, 2018 15:05
-
Does this also affect virtualized OSs, like a Linux running in Virtualbox, ie. can the Linux running inside the Virtualbox running on Windows 10 host compromise the Windows 10 host?
Yep it does.
On a side note, AMD introduced RAM encryption in EPYC which basically makes it immune to this.
-
#119 Reply
Posted by
rrinker
on 05 Jan, 2018 15:06
-
Does this also affect virtualized OSs, like a Linux running in Virtualbox, ie. can the Linux running inside the Virtualbox running on Windows 10 host compromise the Windows 10 host?
That is the BIGGEST danger of this, and why Microsoft rushed out patches for the host systems in their Azure cloud environment ahead of the originally planned date.
Mostly without incident, but we've had a few customers with issues where things didn't come up cleanly after the host their VMs run on was restarted.
-
#120 Reply
Posted by
bd139
on 05 Jan, 2018 15:11
-
Yep it does.
On a side note, AMD introduced RAM encryption in EPYC which basically makes it immune to this.
Until someone works out how to read the keys with it
-
#121 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 15:11
-
Does this also affect virtualized OSs, like a Linux running in Virtualbox, ie. can the Linux running inside the Virtualbox running on Windows 10 host compromise the Windows 10 host?
Yep it does.
On a side note, AMD introduced RAM encryption in EPYC which basically makes it immune to this.
If that works properly and sufficiently it might just net them a huge piece of the pie.
-
#122 Reply
Posted by
BravoV
on 05 Jan, 2018 15:21
-
If that works properly and sufficiently it might just net them a huge piece of the pie.
Yep, a friend who works at a high level in a big corporation just told me that for their company's major server refresh program due this year, upper management has decided to rule out Intel-based servers; they're going to issue a major purchase order this Q1.
I can imagine similar scenes are happening, and will keep happening through at least Q1 and Q2 this year, at big companies throughout the world.
Looks like 2018 is a good year for AMD's CEO Lisa Su, at least.
-
#123 Reply
Posted by
dr.diesel
on 05 Jan, 2018 15:32
-
I plan to do the same. AMD is now competitive enough to suit my/customers' needs, but also, Intel needs more competition.
-
#124 Reply
Posted by
cdev
on 05 Jan, 2018 15:39
-
-
#125 Reply
Posted by
Ampera
on 05 Jan, 2018 15:44
-
My next main machine will be AMD. Ryzen if there isn't something better out there.
-
-
Spectre & Meltdown - Computerphile
-
#127 Reply
Posted by
wraper
on 05 Jan, 2018 16:03
-
My next main machine will be AMD. Ryzen if there isn't something better out there.
Mine already is, and I use ECC RAM (ECC not officially supported but not locked out either).
-
#128 Reply
Posted by
cdev
on 05 Jan, 2018 16:05
-
The times they are a changin'
-
#129 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 16:10
-
I plan to do the same, AMD is now competitive enough to suit my/customers needs, but also, Intel needs more competition.
Even without security considerations, AMD offers a good product at a more than reasonable price. Unlike previous generations, this one seems to be a good choice.
-
#130 Reply
Posted by
bd139
on 05 Jan, 2018 16:13
-
Yeah even I'm looking at a Ryzen based machine to replace my HP Z620. Less power consumption, similar performance, quieter and smaller.
Edit: and not Intel
-
#131 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 16:17
-
Yeah even I'm looking at a Ryzen based machine to replace my HP Z620. Less power consumption, similar performance, quieter and smaller.
Edit: and not Intel
Dat workstation though. Those HP ones tickle me the right way. What processor configuration does yours have?
-
#132 Reply
Posted by
RoGeorge
on 05 Jan, 2018 16:20
-
Spectre & Meltdown - Computerphile
All was clear until the last step.
How exactly are the speculative results extracted?
How come that the speculated values can still leave side effects behind, even after discarding the results?
What are those side effects, and how are they used to access a mispredicted and discarded calculation?
-
#133 Reply
Posted by
BravoV
on 05 Jan, 2018 16:39
-
My next main machine will be AMD. Ryzen if there isn't something better out there.
Damn Intel, I pulled the trigger on a Ryzen board just now; it's way too early for my budget.
Our local mobo distributors are really nasty and well known for hiking prices on occasions like this. Local stock of mid-range and upper-class motherboards is also starting to dry out, since distributors don't stockpile them in the quantities they do low-end mainstream boards, and the next batch of imports may take months to arrive.
Just ordered an ASRock X370 Taichi; hopefully that is enough for now.
-
#134 Reply
Posted by
Kalvin
on 05 Jan, 2018 16:46
-
How exactly are the speculative results extracted?
How come that the speculated values can still leave side effects behind, even after discarding the results?
What are those side effects, and how are they used to access a mispredicted and discarded calculation?
If I understood the video correctly, the exploits take advantage of [timing] information about whether or not some [injected] value has been cached by the CPU, due to the speculative nature of instruction execution in modern CPUs. You just need to make the CPU fetch some known data from memory, and use the available high-resolution on-chip timers to measure how long that fetch takes. If the execution time is "fast", the value was cached; if it was "slow", the value was not in the cache. From this direct timing information one can indirectly extract the information wanted for the exploit.
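If it helps to see the logic, here is a toy Python model of that fast/slow test. The latency numbers and all the names are purely illustrative (a real attack reads a hardware cycle counter such as rdtsc); the point is only the classification step:

```python
# Toy model of the cache-timing side channel described above.
# Real attacks time actual memory reads; here "latency" is simulated so
# the hit/miss classification can be shown deterministically.

CACHE_HIT_NS = 40    # typical cache-hit latency (illustrative)
CACHE_MISS_NS = 300  # typical DRAM-access latency (illustrative)
THRESHOLD_NS = 120   # the attacker calibrates this cut-off beforehand

def measure_access(addr, cache):
    """Pretend timer: fast if the line is cached, slow if it comes from DRAM."""
    return CACHE_HIT_NS if addr in cache else CACHE_MISS_NS

def was_speculatively_touched(addr, cache):
    """The 'fast vs slow' test from the post: a hit means the CPU fetched it."""
    return measure_access(addr, cache) < THRESHOLD_NS

# Suppose speculative execution pulled line 0x40 into the cache:
cache = {0x40}
print(was_speculatively_touched(0x40, cache))  # True  (fast => was fetched)
print(was_speculatively_touched(0x80, cache))  # False (slow => untouched)
```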
-
#135 Reply
Posted by
Cerebus
on 05 Jan, 2018 16:46
-
All was clear until the last step.
How exactly are the speculative results extracted?
How come that the speculated values can still leave side effects behind, even after discarding the results?
What are those side effects, and how are they used to access a miss predicted and discarded calculation?
Cache timings. The speculative fetches leave the fetched data in the cache(s). By requesting something from memory and timing the result, you know whether it was cached or not, so you can probe the cache to see whether it holds something, and hence whether it was the target of a speculative fetch.
Mutating that ability into reading data requires a whole extra layer, and some knowledge of the data you're hunting that allows you to convert 'it was in the cache' into 'its value is x'. The obvious method is to conditionally fetch some other data based on the content of the forbidden data; this will fault, but not before the condition has been speculatively executed, which controls the fetch into cache, which gives you knowledge of whether the condition was met or not.
It's pretty easy to see how you could turn that into a binary tree that chases down the current value of forbidden_location.
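A toy sketch of that "presence in the cache stands as a proxy for the value" step, with the cache modeled as a Python set and every name (SECRET, speculative_gadget, the probe keys) made up for illustration:

```python
# Toy model: one speculative access indexed by the secret byte leaks that
# byte, because exactly one of 256 candidate cache lines becomes 'fast'.

SECRET = 0x5A  # the forbidden byte the attacker wants to learn

def speculative_gadget(cache):
    """Models the transient access: it faults before retiring, but the
    probe-array line indexed by the secret is already in the cache."""
    cache.add(("probe", SECRET))

def recover_byte():
    cache = set()
    speculative_gadget(cache)          # architecturally 'never happened'
    for guess in range(256):           # probe all 256 candidate lines
        if ("probe", guess) in cache:  # stands in for a fast timed read
            return guess
    return None

print(hex(recover_byte()))  # 0x5a
```

The linear probe shown here is the simple variant; the binary-tree chase mentioned above just replaces the 256-way scan with comparisons that halve the candidate range each round.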
-
#136 Reply
Posted by
Decoman
on 05 Jan, 2018 16:53
-
Seeing how there are all these news articles on the net now about this issue, with primarily the two critical vulnerabilities nicknamed 'Spectre' and 'Meltdown', I can't help but think how helpless the world is, because at the end of the day the news outlets seem to me to be more like entertainment than journalism; otherwise I would have wanted to see computer security taken more seriously throughout the whole year, at least on some editorial level, so that there isn't just the occasional horrific event popping up.
And then I think that once reporting of computer security issues becomes this shallow, more of a public spectacle, it also makes the journalism that's already there non-objective, once a journalist makes general statements that maybe seem OK to the journalist there and then, but, all things considered, would be erroneous when simplifications and generalizations end up being poignant messages that dull the broader range of issues with anything technical. I suppose one type of flawed critical thinking would be to arrive at a conclusion that dictates that something in particular is flawed (like a known vulnerability in a computer chip), when perhaps it is the underlying feature(s) that can be said to allow catastrophic failures in computer security to exist in the first place. A parallel to this idea of there being a horrible set of features in the first place would be Adobe's Flash platform, which afaik is badly tarnished by what I understand to be ever-recurring 'remote execution vulnerabilities' in the code of the Flash plug-in.
So with regard to the Flash plugin: some time back, I followed the advice of experts and finally uninstalled Flash for good.
I wish everything related to computer software and hardware were better compartmentalized, with a perfectly good foundation for computers to run off. And Linux wouldn't be that kind of software for me, which, iirc, is known for prioritizing usability rather than security. When I once had an interest in trying out a few Linux distros, the people on IRC seemed more like fanboys than sensible people, sort of patting themselves on the back for knowing how to install stuff and set file flags without really knowing how things work in the kernel. And with Linus living in the USA, I feel I can't even trust the management, but that is just me. It didn't help when, some years ago, Linus reportedly joked in response to a serious question about whether he had ever been approached by the US government to solicit cooperation: the man said 'no' but nodded 'yes'. Not something to joke about.
-
#137 Reply
Posted by
Decoman
on 05 Jan, 2018 16:59
-
As a sort of off-topic comment of sorts, but related to computer security, I can highly recommend watching the yearly talks at the
'RSA conference', called
"The cryptographers' panel" (try speaking the word- cryptographers' - out loud). They had previously some guy that used to work for NSA on the panel (iirc a Mr Brian Snow), but NSA hasn't had a representative there on the panel for a couple of years now.
Here's the 2017 one: (Note reference to NSA's "sweet bee" = suite B)
I think I incidentally read today that one of the individuals who worked on discovering one of these two new vulnerabilities is in fact the host seen on the very left in the still photo for the video. The bearded guy, second from the right, is Whitfield Diffie, one of the inventors of the Diffie–Hellman key exchange. It has also been pointed out that the UK's spy agency independently discovered this form of secure key exchange around the same time.
https://en.wikipedia.org/wiki/Diffie–Hellman_key_exchange
"The scheme was first published by Whitfield Diffie and Martin Hellman in 1976, but in 1997 it was revealed that James H. Ellis, Clifford Cocks and Malcolm J. Williamson of GCHQ, the British signals intelligence agency, had previously[when?] shown how public-key cryptography could be achieved." - Wikipedia
Btw, Susan Landau and Whitfield Diffie (both appearing in the video I linked just above) coauthored a book about privacy and computer security (titled "Privacy on the Line"). One interesting point I remember from the book is that espionage directly undermines the privacy and secrecy between two parties that talks and deals require in order to reach a fair agreement. If your side learns through espionage that the other party has agreed among themselves to accept an offer of 10 billion $ for something in particular, your party might abuse this piece of private information to undercut the deal by, say, offering 9 billion $. So if you, reading this, happen to think that nation-state espionage is totally OK, because you expect someone like the NSA to acquire such information as simply doing their job, I would argue that you are lauding behavior that is obviously unethical, unjust, unfair, and maybe even criminal in the grand scheme of things.
-
#138 Reply
Posted by
RoGeorge
on 05 Jan, 2018 17:18
-
Still don't get it.
Let's say I have all the timing information, I know if it was a cache fetch or not, and I already tricked the processor into executing the false branch. Now, the speculative execution has finished. The results from the false branch execution are in the cache or in the CPU's registers, but the processor won't give those results to me, because very soon it will discard them all. Those results will be discarded as soon as the processor finds out that the speculative execution was in vain.
How can I read those results before being discarded?
-
#139 Reply
Posted by
edavid
on 05 Jan, 2018 17:30
-
Still don't get it.
Let's say I have all the timing information, I know if it was a cache fetch or not, and I already tricked the processor into executing the false branch. Now, the speculative execution has finished. The results from the false branch execution are in the cache or in the CPU's registers, but the processor won't give those results to me, because very soon it will discard them all. Those results will be discarded as soon as the processor finds out that the speculative execution was in vain.
How can I read those results before being discarded?
A protection fault on a speculatively executed load doesn't cause a cache flush. So, the cached/not cached state is 1 bit of information that is not discarded, and can be read.
-
#140 Reply
Posted by
Cerebus
on 05 Jan, 2018 17:44
-
Still don't get it.
Let's say I have all the timing information, I know if it was a cache fetch or not, and I already tricked the processor into executing the false branch. Now, the speculative execution has finished. The results from the false branch execution are in the cache or in the CPU's registers, but the processor won't give those results to me, because very soon it will discard them all. Those results will be discarded as soon as the processor finds out that the speculative execution was in vain.
How can I read those results before being discarded?
The computational result isn't stored, but the trace of it having been speculatively calculated is there through its presence in the cache (albeit with its old, non-speculatively-executed value). If you make that fetching into the cache conditional on some value you aren't supposed to have access to, then that presence in the cache stands as a proxy for the value.
flush X from the cache;
IF forbidden_variable == test_value THEN alter some other value X in a way that loads it into the cache FI
IF X is in the cache THEN it is implied that forbidden_variable == test_value FI
rinse and repeat for each test_value
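Those pseudocode steps translate into a toy Python loop like this. The cache is a set and "X in cache" stands in for a timed read; all names are illustrative, not a real exploit:

```python
# Guess-and-check version of the pseudocode above: one speculative
# 'IF forbidden == guess THEN load X' per guess, then probe X.

forbidden_variable = 42  # the value the attacker may not read directly

def speculate(cache, test_value):
    cache.discard("X")                    # flush X from the cache
    if forbidden_variable == test_value:  # speculatively executed compare;
        cache.add("X")                    # the fault comes too late to undo this fill

def leak_value():
    cache = set()
    for test_value in range(256):         # rinse and repeat
        speculate(cache, test_value)
        if "X" in cache:                  # X cached => the guess was right
            return test_value
    return None

print(leak_value())  # 42
```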
-
#141 Reply
Posted by
timb
on 05 Jan, 2018 18:14
-
There have been designs that fix so many problems with x86. Heck, just starting over with x86 and re-implementing a lot of stuff would make the platform WAY better, but the reason why everybody uses x86, and the reason why I can still run the first version of PC-DOS on a Threadripper, is backwards compatibility with application code. As more and more code is written for x86, we sink deeper into the reason nobody will change.
If the new CPU is sufficiently powerful you could do dynamic translation between x86 and the new architecture, or even outright emulate the x86 for legacy code. The former method could run with only a 10-15% drop in performance for most applications. Anything performance oriented would obviously be recompiled for the new architecture relatively quickly.
So, I don’t think legacy applications are what’s keeping x86 around.
In fact, Apple has undergone this very transition. Twice. They went M68k -> PPC -> x86. It was done both times by incorporating a dynamic translation engine into the OS, along with implementing fat binaries for new software (which would contain both PPC and x86 machine code in the same binary, allowing them to natively execute on either architecture). This worked pretty well for them both times.
(Technically there was a third major transition as well, the one between Mac OS Classic and Mac OS X. They literally replaced the entire OS with one that was completely different. The only bridge between them, software wise, was the Carbon API, created specifically for the purpose. Non-Carbon apps could still be run in OS X via the Classic Environment, which ran a full install of Mac OS 9 in what was, in essence, a bare metal virtual machine. OS/2 used a similar concept. Frankly Microsoft should have used this approach with NT and gotten rid of all the old Windows 9x/3.11 cruft altogether.)
-
#142 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 18:34
-
If the new CPU is sufficiently powerful you could do dynamic translation between x86 and the new architecture, or even outright emulate the x86 for legacy code. The former method could run with only a 10-15% drop in performance for most applications. Anything performance oriented would obviously be recompiled for the new architecture relatively quickly.
So, I don’t think legacy applications are what’s keeping x86 around.
In fact, Apple has undergone this very transition. Twice. They went M68k -> PPC -> x86. It was done both times by incorporating a dynamic translation engine into the OS, along with implementing fat binaries for new software (which would contain both PPC and x86 machine code in the same binary, allowing them to natively execute on either architecture). This worked pretty well for them both times.
(Technically there was a third major transition as well, the one between Mac OS Classic and Mac OS X. They literally replaced the entire OS with one that was completely different. The only bridge between them, software wise, was the Carbon API, created specifically for the purpose. Non-Carbon apps could still be run in OS X via the Classic Environment, which ran a full install of Mac OS 9 in what was, in essence, a bare metal virtual machine. OS/2 used a similar concept. Frankly Microsoft should have used this approach with NT and gotten rid of all the old Windows 9x/3.11 cruft altogether.)
The difference is that Microsoft is used much more in professional and corporate settings. Keeping things ultra backwards compatible is part of why they have the market share that they do. Their corporate customers don't like rocking the boat in a major way. There's a reason that most of the changes made to Windows 10 can be turned off in the Enterprise and Server editions.
-
#143 Reply
Posted by
cdev
on 05 Jan, 2018 18:36
-
I wouldn't count on any modern CPU, firmware (or perhaps even OS) being free of these kinds of issues because they may be a feature, not a bug.
ya know..
-
#144 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 18:39
-
I wouldn't count on any modern CPU, firmware (or perhaps even OS) being free of these kinds of issues because they may be a feature, not a bug.
ya know..
Shoo!
-
-
The majority of Azure customers should not see a noticeable performance impact with this update. We’ve worked to optimize the CPU and disk I/O path and are not seeing noticeable performance impact after the fix has been applied. A small set of customers may experience some networking performance impact. This can be addressed by turning on Azure Accelerated Networking (Windows, Linux), which is a free capability available to all Azure customers. We will continue to monitor performance closely and address customer feedback.
https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
There has been speculation that the deployment of KPTI causes significant performance slowdowns. Performance can vary, as the impact of the KPTI mitigations depends on the rate of system calls made by an application. On most of our workloads, including our cloud infrastructure, we see negligible impact on performance
https://security.googleblog.com/2018/01/more-details-about-mitigations-for-cpu_4.html
All instances across the Amazon EC2 fleet are protected from all known threat vectors from the CVEs previously listed. Customers’ instances are protected against these threats from other instances. We have not observed meaningful performance impact for the overwhelming majority of EC2 workloads.
https://aws.amazon.com/security/security-bulletins/AWS-2018-013/
...Our testing with public benchmarks has shown that the changes in the December 2017 updates resulted in no measurable reduction in the performance of macOS and iOS as measured by the GeekBench 4 benchmark, or in common Web browsing benchmarks such as Speedometer, JetStream, and ARES-6.
...Analysis of these techniques [Spectre] revealed that while they are extremely difficult to exploit, even by an app running locally on a Mac or iOS device, they can be potentially exploited in JavaScript running in a web browser.
https://support.apple.com/en-us/HT208394
-
#146 Reply
Posted by
bd139
on 05 Jan, 2018 19:04
-
Marketing bollocks.
Reality:
https://lkml.org/lkml/2018/1/3/281
We're seeing roughly the same.
Cloud vendors are heading off the investor fallout that would come from having to cut their prices 20% to offset the capacity reduction.
-
#147 Reply
Posted by
Decoman
on 05 Jan, 2018 19:15
-
For anyone thinking that all of this seems a bit complicated and weird: it should be pointed out that nowadays the encryption on a laptop (some of it, I guess) can be broken by recording and analyzing the noise patterns coming from the laptop, measured with a recording device close by. Pretty obscure stuff.
-
-
This is what worries me most: "can be potentially exploited in JavaScript running in a web browser". Right now, here, as we type...
-
-
I think it's not disputed that the NSA attempts to get hardware manufacturers to include back doors in hardware. What would be surprising is if there were not any backdoors, not if there were.
I wouldn't count on any modern CPU, firmware (or perhaps even OS) being free of these kinds of issues because they may be a feature, not a bug.
ya know..
Shoo!
And heartbleed was not an accident...
-
-
This is what worries me most: "can be potentially exploited in JavaScript running in a web browser". Right now, here, as we type...
The rest of the paragraph reads....
...Apple will release an update for Safari on macOS and iOS in the coming days to mitigate these exploit techniques. Our current testing indicates that the upcoming Safari mitigations will have no measurable impact on the Speedometer and ARES-6 tests and an impact of less than 2.5% on the JetStream benchmark. We continue to develop and test further mitigations within the operating system for the Spectre techniques, and will release them in upcoming updates of iOS, macOS, and tvOS. watchOS is unaffected by Spectre.
https://support.apple.com/en-us/HT208394
-
#151 Reply
Posted by
bd139
on 05 Jan, 2018 19:37
-
I think it's not disputed that the NSA attempts to get hardware manufacturers to include back doors in hardware. What would be surprising is if there were not any backdoors, not if there were.
I wouldn't count on any modern CPU, firmware (or perhaps even OS) being free of these kinds of issues because they may be a feature, not a bug.
ya know..
Shoo!
And heartbleed was not an accident...
Heartbleed was definitely an accident. I've written a few things before with exactly the same cock-up in them.
Two entire people were responsible for maintaining OpenSSL, which is the foundation of a big chunk of all public-facing crypto on the planet. You can't expect even the best two people not to miss some fuck-ups in a piece of software written in one of the least well-defined languages of all time (C).
This is what worries me most: "can be potentially exploited in JavaScript running in a web browser". Right now, here, as we type...
The rest of the paragraph reads....
...Apple will release an update for Safari on macOS and iOS in the coming days to mitigate these exploit techniques. Our current testing indicates that the upcoming Safari mitigations will have no measurable impact on the Speedometer and ARES-6 tests and an impact of less than 2.5% on the JetStream benchmark. We continue to develop and test further mitigations within the operating system for the Spectre techniques, and will release them in upcoming updates of iOS, macOS, and tvOS. watchOS is unaffected by Spectre.
https://support.apple.com/en-us/HT208394
This is because the timers in JS have enough resolution to distinguish cache read times. The browsers are merely reducing timer resolution. Firefox has already done this as of v57.
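A quick Python sketch of why coarsening the timer blunts the attack: quantize the clock readings and the hit/miss gap disappears (the latency and resolution numbers are illustrative):

```python
# Hit vs miss latencies an attacker tries to distinguish (illustrative).
HIT_NS, MISS_NS = 40, 300

def quantize(t_ns, resolution_ns):
    """What a deliberately coarsened timer reports for a true time t_ns."""
    return (t_ns // resolution_ns) * resolution_ns

# Fine-grained timer (5 ns ticks): hit and miss are clearly separable.
print(quantize(HIT_NS, 5), quantize(MISS_NS, 5))        # 40 300

# Coarsened timer (1000 ns ticks): both readings collapse to the same value.
print(quantize(HIT_NS, 1000), quantize(MISS_NS, 1000))  # 0 0
```

Worth noting: an attacker can partly work around this by repeating accesses to amplify the timing difference, which is why the browser vendors treat reduced timer resolution as one mitigation among several rather than a complete fix.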
-
#152 Reply
Posted by
bd139
on 05 Jan, 2018 19:38
-
-
#153 Reply
Posted by
RoGeorge
on 05 Jan, 2018 19:45
-
Still don't get it.
....
How can I read those results before being discarded?
The computational result isn't stored, but the trace of it having been...
Just finished reading the original paper for Meltdown,
https://meltdownattack.com/meltdown.pdf. The video does its best, but it was not enough, and yes, the vulnerability is as bad as it can be.
The attack is very clever indeed, but I found the paper totally worth reading, not only for describing the attack, but especially for describing the principles of speculative execution and out-of-order execution in general, and Intel's implementation in particular.
-
-
Yes indeed. It doesn’t look good for the IT business at all. I have, as someone deeply involved in the security side of things, considered cashing everything I have in and bailing. It’s too bloody stressful keeping the snowflakes covered in piss alive (google “programming sucks” for context of that comment).
Sorry for derailing the thread topic a bit, but as (primarily) a business guy myself who started out as a technical guy (software, then mechanical engineering, then electronics) - the above comment sounds like music to my ears, from a business standpoint. In other words, you are an expert in a field that is full of fast-paced change and commotion, and where there are always new emergencies and endless numbers of "snowflake" clients who need fires put out and assurances given. That sounds like a recipe for high income, being able to be picky about who you take on as clients, and essentially shooting fish in a barrel. What makes you want to cash out your chips and get out? The stress?
-
#155 Reply
Posted by
JoeN
on 05 Jan, 2018 20:30
-
How exactly are the speculative results extracted?
How come that the speculated values can still leave side effects behind, even after discarding the results?
What are those side effects, and how are they used to access a mispredicted and discarded calculation?
If I understood the video correctly, the exploits take advantage of [timing] information about whether or not some [injected] value has been cached by the CPU, due to the speculative nature of instruction execution in modern CPUs. You just need to make the CPU fetch some known data from memory, and use the available high-resolution on-chip timers to measure how long that fetch takes. If the execution time is "fast", the value was cached; if it was "slow", the value was not in the cache. From this direct timing information one can indirectly extract the information wanted for the exploit.
The analogy I am using for non-technical people is that the CPU basically has a gambler's "tell". The gambler won't tell you his card; each time you ask him whether it is a deuce or a three or a four or whatever, he says "piss off". But unfortunately for him, he says it a lot faster when you have actually asked the right question.
-
#156 Reply
Posted by
raptor1956
on 05 Jan, 2018 20:40
-
So, what are the odds that the NSA and GCHQ and many other government signals intelligence operations were unaware of this? Wanna bet these exploits are in current use by some of the above?
Brian
-
#157 Reply
Posted by
mtdoc
on 05 Jan, 2018 20:50
-
The ironic thing is that while this may cause a brief hit to Intel's rep, in the end it probably means selling a whole bunch of new chips.
And new computer sales for Apple, HP, Dell, etc, etc. which means new sales for storage, memory and other peripheral makers...
Based on the stocks' reaction today, I think the market may be coming to realize this.
Maybe they can just make all computers disposable with a 1 year shelf life - that'll keep the tech market pumping...
-
#158 Reply
Posted by
bd139
on 05 Jan, 2018 20:52
-
Yes indeed. It doesn’t look good for the IT business at all. I have, as someone deeply involved in the security side of things, considered cashing everything I have in and bailing. It’s too bloody stressful keeping the snowflakes covered in piss alive (google “programming sucks” for context of that comment).
Sorry for derailing the thread topic a bit, but as (primarily) a business guy myself who started out as a technical guy (software, the mechanical engineering, then electronics) - the above comment sounds like music to my ears, from a business standpoint. In other words, you are an expert in a field that is full of fast-paced change, commotion and where there are always new emergencies and endless numbers of "snowflake" clients who need fires put out and assurances given. That sounds like a recipe for high income, being able to be picky about who you take on as clients, and essentially shooting fish in a barrel. What makes you want to cash out your chips and get out? The stress?
You’re right about the recipe. I am however entirely immune to stress. I’m the sort of person who sits there leisurely eating a Cornish pasty while the world burns around me. You don’t solve any problems by getting stressed. Occasionally smashing something that has smited you is recommended however (hat tip to Mr Widlar for that one)
The problem is my brain. I can see the whole abstraction of the machine in my mind, vast networks spanning thousands of nodes and zoom in and out right down to individual components and even lines of code. I can feel it breathing, see where it is sick, see data flows and bottlenecks instantly. I’m sure any programmer understands the moment this clicks (and then the moment someone taps on your shoulder and it all goes away in a puff of smoke).
Problem is none of this really exists, and it changes so fast that it screws your mind up over time. Unlike a JVM, you don't have a garbage collector up there. Makes you sick. Sometimes I just phase out, unable to switch between the two worlds. It requires so much space that it pushes out things that are important. My wife can recall so many things going back 20 years. I can't. Even some memories of my children are vague when they were very young. I attribute this to information overload. I can still remember which methods to call on the Windows Workflow Foundation SQL persistence engine to get it to dance like the monkey it is, but this is of no value now, as the information is transient and I haven't used it for nearly 8 years.
Some people attribute this to burn out but it’s something different and far more worrying. I know a few people who have bailed already on this basis. One guy even went mental and shit on his bosses chair and threw himself under a bus, which uneventfully stopped before it ran him over and the driver called an ambulance. Most people I work with are addicts of some kind also.
Ergo I suppose I worry about a cross of mental health and the value of the information I am processing over time. It’s not good for you.
Therefore I’m taking the cash I need out as quickly as possible and filling what precious time and headspace I have with things I care about.
And there you have it.
-
-
Yes indeed. It doesn’t look good for the IT business at all. I have, as someone deeply involved in the security side of things, considered cashing everything I have in and bailing. It’s too bloody stressful keeping the snowflakes covered in piss alive (google “programming sucks” for context of that comment).
Sorry for derailing the thread topic a bit, but as (primarily) a business guy myself who started out as a technical guy (software, the mechanical engineering, then electronics) - the above comment sounds like music to my ears, from a business standpoint. In other words, you are an expert in a field that is full of fast-paced change, commotion and where there are always new emergencies and endless numbers of "snowflake" clients who need fires put out and assurances given. That sounds like a recipe for high income, being able to be picky about who you take on as clients, and essentially shooting fish in a barrel. What makes you want to cash out your chips and get out? The stress?
You’re right about the recipe. I am however entirely immune to stress. I’m the sort of person who sits there leisurely eating a Cornish pasty while the world burns around me. You don’t solve any problems by getting stressed. Occasionally smashing something that has smited you is recommended however (hat tip to Mr Widlar for that one)
The problem is my brain. I can see the whole abstraction of the machine in my mind, vast networks spanning thousands of nodes and zoom in and out right down to individual components and even lines of code. I can feel it breathing, see where it is sick, see data flows and bottlenecks instantly. I’m sure any programmer understands the moment this clicks (and then the moment someone taps on your shoulder and it all goes away in a puff of smoke).
Problem is none of this really exists and is changing so fast and this screws your mind up over time. Unlike a JVM, you don’t have a garbage collector up there. Makes you sick. Sometimes I just phase out unable to switch between the two worlds. It requires so much space that it pushes things that are important out. My wife can recall so many things going back 20 years. I can’t. Even some memories of my children are vague when they were very young. I attribute this to information overload. Now I can remember which methods to call on windows workflow foundation SQL persistence engine to get it to dance like the monkey it is but this is of no value now as the information is transient as I haven’t used it for nearly 8 years.
Some people attribute this to burnout but it’s something different and far more worrying. I know a few people who have bailed already on this basis. One guy even went mental, shat on his boss’s chair and threw himself under a bus, which fortunately stopped before it ran him over, and the driver called an ambulance. Most people I work with are addicts of some kind also.
Ergo I suppose I worry about a cross of mental health and the value of the information I am processing over time. It’s not good for you.
Therefore I’m taking the cash I need out as quickly as possible and filling what precious time and headspace I have with things I care about.
And there you have it.
Understood completely. I have exactly the same situation. I have a few different technical areas I work in at my job which are quite separate and different and require a lot of time to keep technically proficient in. I really enjoy each of these different fields, but juggling all of that plus running a business and all the associated tasks including manufacturing/production means I always have a million things going on. I joke that I have the memory of a goldfish - I forget everything that happened more than 2 minutes ago. I use precisely the same description as you - that so much data goes into my brain constantly that most stuff gets squeezed out, leaving me forgetting many things most other people remember.
I don't have a wife/kids but I can definitely see how others would feel you are not "present" enough with them when you suffer from such information overload that you don't recall things and they feel it indicates a lack of care. It doesn't, I know, but I am sure it can appear that way to others.
Well, cheers to you mate for recognizing it as a potential issue and addressing it. Takes a solid husband and father to do so. My respect.
-
#160 Reply
Posted by
floobydust
on 05 Jan, 2018 22:46
-
Linus Torvalds called it out:
...
> Any speculative indirect calls in the kernel can be tricked
> to execute any kernel code, which may allow side channel
> attacks that can leak arbitrary kernel data.
"Why is this all done without any configuration options?
A *competent* CPU engineer would fix this by making sure speculation doesn't happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.
I think somebody inside of Intel needs to really take a long hard look at their CPU's, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed.
.. and that really means that all these mitigation patches should be written with "not all CPU's are crap" in mind.
Or is Intel basically saying "we are committed to selling you shit forever and ever, and never fixing anything"? Because if that's the case, maybe we should start looking towards the ARM64 people more.
Please talk to management. Because I really see exactly two possibilities:
- Intel never intends to fix anything
OR
- these workarounds should have a way to disable them.
Which of the two is it?"
Linus
https://lkml.org/lkml/2018/1/3/797
-
#161 Reply
Posted by
nctnico
on 05 Jan, 2018 23:10
-
Some people attribute this to burnout but it’s something different and far more worrying. I know a few people who have bailed already on this basis. One guy even went mental, shat on his boss’s chair and threw himself under a bus, which fortunately stopped before it ran him over, and the driver called an ambulance. Most people I work with are addicts of some kind also.
Ergo I suppose I worry about a cross of mental health and the value of the information I am processing over time. It’s not good for you.
What helps is to take up a hobby which doesn't need much thinking but keeps you busy. I'm not a sports person at all but I took up swimming a couple of years ago and it helps to clear&organise my mind.
-
#162 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:26
-
I think it's not disputed that the NSA attempts to get hardware manufacturers to include back doors in hardware. What would be surprising is if there were not any backdoors, not if there were.
Nobody's disputing this, but this thread is not about that. Neither are all the other threads you insist on making into conspiracy stories. My remark was about the continuous pushing of your agenda and derailing of threads.
Don't get me wrong, you seem like a nice guy I could have a drink with, but the persistence is tiring.
-
#163 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:28
-
Anyone thinking that all of this seems a bit complicated and weird: it should be pointed out that nowadays the encryption on a laptop (some of it, I guess) can be broken by recording and analyzing the noise patterns coming from the laptop, measuring the sound with a recording device close by. Pretty obscure stuff.
Do you have a link? I think that's the kind of side channel attack that gets a fair bit of attention in regards to mitigation. Of course, an infected laptop could send out intentional sounds or signals that can be used to break encryption. That's a given, but you need to have a foothold already and in those cases you generally have more effective methods to extract data.
-
#164 Reply
Posted by
bd139
on 05 Jan, 2018 23:35
-
Some people attribute this to burnout but it’s something different and far more worrying. I know a few people who have bailed already on this basis. One guy even went mental, shat on his boss’s chair and threw himself under a bus, which fortunately stopped before it ran him over, and the driver called an ambulance. Most people I work with are addicts of some kind also.
Ergo I suppose I worry about a cross of mental health and the value of the information I am processing over time. It’s not good for you.
What helps is to take up a hobby which doesn't need much thinking but keeps you busy. I'm not a sports person at all but I took up swimming a couple of years ago and it helps to clear&organise my mind.
Agree entirely. Exercise is a winner every time as well. I’m not a sports person but I found I really like running. Unfortunately this makes me hungry so I ran about 7 miles earlier this week then went in KFC on the way back and consumed my body weight in chicken
Anyone thinking that all of this seems a bit complicated and weird: it should be pointed out that nowadays the encryption on a laptop (some of it, I guess) can be broken by recording and analyzing the noise patterns coming from the laptop, measuring the sound with a recording device close by. Pretty obscure stuff.
Do you have a link? I think that's the kind of side channel attack that gets a fair bit of attention in regards to mitigation. Of course, an infected laptop could send out intentional sounds or signals that can be used to break encryption. That's a given, but you need to have a foothold already and in those cases you generally have more effective methods to extract data.
Rubber-hose cryptanalysis is better in this situation
Then again, this problem predates computers. My wife’s grandfather was the designer of “quiet rooms” used by the British government around the Cold War era. Even in the 1960s they had worked out that you could listen in on conversations via the sounds transmitted through the heating pipes running in and out of the rooms. They even had rudimentary expertise in deciphering chunks of documents as they were being typed, with “golden eared” experts working from recordings of the pipe sounds.
-
#165 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:37
-
The ironic thing is that while this is may cause a brief hit to Intel's rep, in the end it probably means selling a whole bunch of new chips.
And new computer sales for Apple, HP, Dell, etc, etc. which means new sales for storage, memory and other peripheral makers...
Based on the stocks reaction today, I think the market may be coming to realize this.
Maybe they can just make all computers disposable with a 1 year shelf life - that'll keep the tech market pumping...
Contrary to what many companies think, the world isn't made of money. You can't keep buying new kit and you can't keep migrating. The pace is already quite taxing as it is, and adding to it might break the camel's back. There's some room, but buying new computers the whole world over simply isn't an option. Any organisation bigger than tiny is constantly renewing itself just to maintain the status quo, stuffing bricks back into the crumbling wall. Many organisations are sitting ducks in a world where cybercrime is rapidly becoming one of the largest and most profitable businesses.
Maybe even more importantly, there's no guarantee the next one won't pop up next month. We've had various hardware dependent attacks the past year. You can't keep buying new stuff every time, not even having it fully deployed when the next one hits.
-
#166 Reply
Posted by
mtdoc
on 05 Jan, 2018 23:38
-
I think its not disputed that NSA attempts to get hardware manufacturers to include back doors in hardware. What would be surprising is if there were not any backdoors, not if there were.
Nobody's disputing this, but this thread is not about that. .
No, I think it is relevant. What are the odds that the NSA was not aware of this and already exploiting it?
Was this really an unintentional "bug"?
-
#167 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:43
-
No, I think it is relevant. What are the odds that the NSA was not aware of this and already exploiting it?
Was this really an unintentional "bug"?
What's the use speculating about that? We won't know, until someone releases the documents. We know that they look for these things, even try to plant them, but we don't know if that's the case here. We do know that we tend to attribute to malice what is actually stupidity. Maybe it is, maybe it's not. We can argue yes or no all we want, but we won't get closer to the truth.
And again, it's also about making every single thread into a conspiracy. It's tiring.
-
#168 Reply
Posted by
bd139
on 05 Jan, 2018 23:48
-
The ironic thing is that while this is may cause a brief hit to Intel's rep, in the end it probably means selling a whole bunch of new chips.
And new computer sales for Apple, HP, Dell, etc, etc. which means new sales for storage, memory and other peripheral makers...
Based on the stocks reaction today, I think the market may be coming to realize this.
Maybe they can just make all computers disposable with a 1 year shelf life - that'll keep the tech market pumping...
Contrary to what many companies think, the world isn't made of money. You can't keep buying new kit and you can't keep migrating. The pace is already quite taxing as it is, and adding to it might break the camel's back. There's some room, but buying new computers the whole world over simply isn't an option. Any organisation bigger than tiny is constantly renewing itself just to maintain the status quo, stuffing bricks back into the crumbling wall. Many organisations are sitting ducks in a world where cybercrime is rapidly becoming one of the largest and most profitable businesses.
Maybe even more importantly, there's no guarantee the next one won't pop up next month. We've had various hardware dependent attacks the past year. You can't keep buying new stuff every time, not even having it fully deployed when the next one hits.
What would be nice is FPGA fabric and self reconfigurable computers. Then you can keep a base abstraction which is formal rather than a pile of hacks. If there’s a problem, reconfigure the hardware.
This is a lower level than microcode.
-
#169 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:53
-
Some of the exploits I've read about seem to allow arbitrary code running on a VM to access code running in a higher ring which is supposed to be invisible to the OS, allowing information from other VMs, or from code running on the hardware above the kernel and OS space, to be accessed.
This all begs the question, what and how do people know when OS's and virtualization end?
Recently it turned out that many processors from one manufacturer had an entire separate CPU with an embedded OS, Minix, on the same die, which could access everything running in the main CPU's OS space, image the drive, etc, including when the processor was supposed to be powered off or hibernating. This evil twin OS ran in something called "ring -3" and it even had a web server. Some traffic going over the net also never was seen by the OS, it went straight to this other CPU.
You can read more here:
http://www.cs.vu.nl/~ast/intel
Not one manufacturer; both major x86 manufacturers. AMD calls it TrustZone and actually has an ARM processor embedded. They aren't some hidden secret either, but are sold as a management tool. It's a huge boon not having to manually turn on 2500 computers to run an update. What's new is that the theoretical risks foreseen for years have now been converted into an actual, practical threat, by a vulnerability and by the system being dissected and understood ever more. The security through obscurity has started cracking in major ways.
-
#170 Reply
Posted by
Mr. Scram
on 05 Jan, 2018 23:56
-
What would be nice is FPGA fabric and self reconfigurable computers. Then you can keep a base abstraction which is formal rather than a pile of hacks. If there’s a problem, reconfigure the hardware.
This is a lower level than microcode.
I've thought about this the past year a lot. Doing it that way solves a number of problems, but creates others. Having changeable hardware under the hood means never knowing what is actually going on. You'd need some independent way of verifying the configuration of the chip and if it's hardware doing that you're back to square one, if it's a configurable fabric it's turtles all the way down.
-
#171 Reply
Posted by
cdev
on 06 Jan, 2018 00:09
-
But here's the problem: the author of Minix, who is kind of a luminary in the world of computing, and a significant number of other people far more knowledgeable than myself (or, I venture to say, likely yourself as well) were disturbed enough about this to make a stink about it. And from the descriptions I have read about it, it doesn't look to me like that is all it is.
Even if that was the intent, shouldn't it then be absent from cheaper HW, since that HW is basically meant to be used by consumers, not in servers, and is basically disposable? But it is present.
Were it just a routine system administration tool, for which the internals were known and public, as you portray it, the outcry - which was focused on security and privacy implications - would not have happened.
Some of the exploits I've read about seem to allow arbitrary code running on a VM to access code running in a higher ring which is supposed to be invisible to the OS, allowing information from other VMs, or from code running on the hardware above the kernel and OS space, to be accessed.
This all begs the question, what and how do people know when OS's and virtualization end?
Recently it turned out that many processors from one manufacturer had an entire separate CPU with an embedded OS, Minix, on the same die, which could access everything running in the main CPU's OS space, image the drive, etc, including when the processor was supposed to be powered off or hibernating. This evil twin OS ran in something called "ring -3" and it even had a web server. Some traffic going over the net also never was seen by the OS, it went straight to this other CPU.
You can read more here:
http://www.cs.vu.nl/~ast/intel
Not one manufacturer; both major x86 manufacturers. AMD calls it TrustZone and actually has an ARM processor embedded. They aren't some hidden secret either, but are sold as a management tool. It's a huge boon not having to manually turn on 2500 computers to run an update. What's new is that the theoretical risks foreseen for years have now been converted into an actual, practical threat, by a vulnerability and by the system being dissected and understood ever more. The security through obscurity has started cracking in major ways.
-
#172 Reply
Posted by
stj
on 06 Jan, 2018 00:28
-
they aren't called "INTEL" for nothing!!
hell, they aren't even designed in the west - think about that for a second!!!
i'm pretty sure that breaks rules relating to military procurement.
-
#173 Reply
Posted by
Mr. Scram
on 06 Jan, 2018 00:45
-
But, here's the problem, the author of Minix, who is kind of a luminary in the world of computing, and a significant number of other people far more knowledgeable than myself or I venture to say likely yourself as well, were disturbed enough about this to make a stink about it. And the description I have read about it it doesn't look like that is all it is to me.
Even if that was the intent, then shouldn't it not be present on cheaper HW, since that HW is basically meant to be used by consumers, not in servers, and is basically disposable? But, it is.
Were it just a routine system administration tool, for which the internals were known and public, as you portray it as, the outcry - which was focused on security and privacy implications, would not have happened.
I can't put this any more gently than to say it seems you're filling the gaps in your knowledge with your imagination. The tool being present was well known. It has been a black box for quite a while, but has been criticized for exactly that too. I have included a link to the FAQ of the open source BIOS Libreboot, which doesn't support processors with Intel ME. It explains in some detail what it is, what it does and what its capabilities are. It also includes links to other independent pages with similar information. The page dates from July 2015 and hasn't been manipulated after the fact; I read it myself around that time. There are many other sources with similar information which pre-date this page significantly. One of the links is, for instance, dated June 2014.
The recent uproar was because it became clear the black box was showing cracks. The thing hidden from sight could now be seen by many people, and the protection the obscurity was supposed to bring was gone. Despite the Intel ME and its capabilities being known, its exact inner workings weren't. One of the things discovered was that it actually runs MINIX, much to the surprise of the author of that software.
The Intel ME may have been a surprise to the general public, but it hardly was a secret. People who know what they're talking about had been fearing for years what would inevitably happen, and the actual source of the uproar was that this was the big "told you so" moment everyone knew was coming. The exposure merely meant the public at large finally caught wind of it.
So please, keep this thread clear of the speculations and theories you tend to line other threads with. The subject is complicated enough as it is and many people already have trouble understanding what actually is going on without FUD being mixed in.
https://web.archive.org/web/20150730233729/http://libreboot.org:80/faq/#intelme
https://web.archive.org/web/20150908031804/https://www.fsf.org/blogs/community/active-management-technology
-
#174 Reply
Posted by
Cerebus
on 06 Jan, 2018 01:15
-
No, I think it is relevant. What are the odds that the NSA was not aware of this and already exploiting it?
Was this really an unintentional "bug"?
No, it wasn't, on the balance of probabilities, deliberate.
I can quite see how the engineers would miss this. Their targets would have been meeting performance goals and providing the security specified by the architecture, not meeting the security goals that someone with adversarial security experience would consider desirable - which would include quashing any possible side channels. (I can tell you from experience of trying to design systems to be covert-channel free that this is very hard to do on small systems, and immensely hard to do on large complex systems like the super-scalar, out-of-order execution engines that modern CPUs are.)
Speculative execution (and super-scalar processors) are all about trying to reduce latency. Protection mechanisms introduce latency. So you try to run the protection checks in parallel with the speculative execution, and only stop the speculative execution once you've got results from the protection checks. This means that you will almost certainly use some protected data for speculative execution before you know the results of the protection checks for that data. If you don't, you lose some of the latency advantages of speculative execution.
However, this has side effects, one of which - as we have seen - is polluting the cache with speculative fetches. An adversarial security-minded mindset would have spotted this as an information leak and at least provided an option to stall the speculative execution pipeline with interlocks between protected actions and protection-check results, resulting in no cache pollution and hence no information leak.
The problem is one of designing the chipset with a performance mindset and not being aware of the security trade-offs of some of those performance-enhancing tricks. In a performance mindset it's OK that a speculative execution which falls foul of a protection check simply fails to retire* those instructions, rather than undoing all the side effects of that speculative execution. Done that way there is no explicit access to that data and the architectural security model is satisfied. As we have seen, this is not enough to satisfy an adversarial security model that is intolerant of implicit partial data leaks.
*retire in this sense means 'write back the results to architectural registers once the speculative execution becomes non-speculative'.
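The mechanism described above can be sketched as a deliberately simplified toy simulation (plain Python, NOT real exploit code - there is no actual CPU or cache here, and all names such as `Cache`, `victim_speculative_read` and `attacker_probe` are invented for illustration): the squashed speculative load never retires, but the cache line it touched stays hot, and that footprint is what leaks the secret.

```python
# Toy model of a speculative-execution cache side channel.
# The "victim" speculatively touches a cache line indexed by a secret
# byte; the protection check then fails and the result is squashed,
# but the cache side effect survives and the "attacker" times it.

SECRET = 42  # byte the attacker is not architecturally allowed to read

class Cache:
    """Models only what matters here: which of 256 lines are resident."""
    def __init__(self):
        self.hot_lines = set()

    def touch(self, line):
        self.hot_lines.add(line)

    def is_hot(self, line):
        return line in self.hot_lines

def victim_speculative_read(cache):
    # The CPU speculatively loads probe_array[SECRET] in parallel with
    # the protection check; the check fails and the load never retires,
    # but the cache line it touched stays resident (the side effect).
    cache.touch(SECRET)
    raise PermissionError("protection check failed; result squashed")

def attacker_probe(cache):
    # Flush+reload style probe: the one line that loads "fast" (is hot)
    # reveals the secret byte's value.
    return [line for line in range(256) if cache.is_hot(line)]

cache = Cache()
try:
    victim_speculative_read(cache)
except PermissionError:
    pass  # architecturally, nothing was leaked
leaked = attacker_probe(cache)
print("recovered from cache footprint:", leaked)  # [42]
```

The real attacks replace the `Cache` object with timing measurements against the actual CPU cache, but the information flow is the same: the architectural result is discarded, the micro-architectural side effect is not.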
-
#175 Reply
Posted by
cdev
on 06 Jan, 2018 02:29
-
-
#176 Reply
Posted by
JoeO
on 06 Jan, 2018 04:23
-
Anyone thinking that all of this seems a bit complicated and weird: it should be pointed out that nowadays the encryption on a laptop (some of it, I guess) can be broken by recording and analyzing the noise patterns coming from the laptop, measuring the sound with a recording device close by. Pretty obscure stuff.
Do you have a link? I think that's the kind of side channel attack that gets a fair bit of attention in regards to mitigation. Of course, an infected laptop could send out intentional sounds or signals that can be used to break encryption. That's a given, but you need to have a foothold already and in those cases you generally have more effective methods to extract data.
I think this is the link you are looking for:
https://arstechnica.com/information-technology/2015/10/how-soviets-used-ibm-selectric-keyloggers-to-spy-on-us-diplomats/
-
#177 Reply
Posted by
JoeO
on 06 Jan, 2018 04:27
-
The ironic thing is that while this is may cause a brief hit to Intel's rep, in the end it probably means selling a whole bunch of new chips.
And new computer sales for Apple, HP, Dell, etc, etc. which means new sales for storage, memory and other peripheral makers...
Based on the stocks reaction today, I think the market may be coming to realize this.
Maybe they can just make all computers disposable with a 1 year shelf life - that'll keep the tech market pumping...
I think that right now, no one will be buying Intel laptops or desktops. Why buy a defective product?
As soon as the bugs are fixed in the hardware sales will resume.
This could also mean there will be great sales on Intel Ls and Ds now until the defective stock is cleared out.
-
#178 Reply
Posted by
Mr. Scram
on 06 Jan, 2018 04:45
-
I think that right now, no one will be buying Intel laptops or desktops. Why buy a defective product?
As soon as the bugs are fixed in the hardware sales will resume.
This could also mean there will be great sales on Intel Ls and Ds now until the defective stock is cleared out.
The same might apply to AMD and even ARM. The coming months or even years might be a nightmare for all of them, as a lot of people are going to wait for new silicon to arrive, and that could be a matter of years. No easy fixes here; you need an actual change of architecture, albeit not the whole architecture. Almost all hardware currently out there, and certainly Intel hardware, has suddenly lost a significant part of its value. As soon as fixed processors arrive, who knows what happens. The old stock might be considered scrap and the prices for the new stuff might go through the roof. You don't really want to replace all of your infrastructure, but you also don't really want to be the one to explain why you're still running outdated hardware. The only real difference with software patches is that this costs a lot more money.
That'd actually be a fairly horrific scenario, if Intel and AMD get rewarded for having serious issues in the hardware.
-
#179 Reply
Posted by
timb
on 06 Jan, 2018 04:55
-
If the new CPU is sufficiently powerful you could do dynamic translation between x86 and the new architecture, or even outright emulate the x86 for legacy code. The former method could run with only a 10-15% drop in performance for most applications. Anything performance oriented would obviously be recompiled for the new architecture relatively quickly.
So, I don’t think legacy applications are what’s keeping x86 around.
In fact, Apple has undergone this very transition. Twice. They went M68k -> PPC -> x86. It was done both times by incorporating a dynamic translation engine into the OS, along with implementing fat binaries for new software (which would contain both PPC and x86 machine code in the same binary, allowing them to natively execute on either architecture). This worked pretty well for them both times.
(Technically there was a third major transition as well, the one between Mac OS Classic and Mac OS X. They literally replaced the entire OS with one that was completely different. The only bridge between them, software wise, was the Carbon API, created specifically for the purpose. Non-Carbon apps could still be run in OS X via the Classic Environment, which ran a full install of Mac OS 9 in what was, in essence, a bare metal virtual machine. OS/2 used a similar concept. Frankly Microsoft should have used this approach with NT and gotten rid of all the old Windows 9x/3.11 cruft altogether.)
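The dynamic translation approach described above can be illustrated with a toy sketch (a made-up two-instruction "legacy" ISA in Python, not actual x86 or Apple's engine): each legacy instruction is translated once into host-native code and cached, so hot code pays the translation cost only on first sight - which is where the modest 10-15% overhead for most applications comes from.

```python
# Toy dynamic binary translation with a translation cache.
# "Legacy" instructions (strings) are compiled into host closures the
# first time they are seen; subsequent executions run the cached
# native code directly.

TRANSLATION_CACHE = {}  # legacy instruction -> host-native closure

def translate(instr):
    """Translate one legacy instruction into a host-native closure."""
    op, *args = instr.split()
    if op == "PUSH":
        n = int(args[0])
        return lambda stack: stack.append(n)
    if op == "ADD":
        return lambda stack: stack.append(stack.pop() + stack.pop())
    raise ValueError(f"unknown legacy op: {op}")

def run(program):
    """Execute a legacy program, translating lazily as we go."""
    stack = []
    for instr in program:
        if instr not in TRANSLATION_CACHE:       # translate on first sight
            TRANSLATION_CACHE[instr] = translate(instr)
        TRANSLATION_CACHE[instr](stack)          # then run "native" code
    return stack

result = run(["PUSH 2", "PUSH 3", "ADD"])
print(result)  # [5]
```

A real translator works on basic blocks of machine code rather than single instructions, and the cache is keyed by guest code address, but the lazy translate-then-cache structure is the same.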
The difference is that Microsoft is used much more in professional and corporate settings. Keeping things ultra backwards compatible is part of why they have the market share that they do. Their corporate customers don't like rocking the boat in major way. There's a reason that most of the changes made to Windows 10 can be turned off in the Enterprise and Server editions.
You can’t really get more backward compatible than running the entire legacy OS in a virtual machine though. In fact, with current versions of Windows, when you try to run a legacy application don’t you essentially download an entire copy of XP that runs in a VM?
Keeping things ultra backwards compatible is also the reason Windows became a security nightmare. All for the benefit of a minority of their customers.
-
#180 Reply
Posted by
Mr. Scram
on 06 Jan, 2018 05:07
-
You can’t really get more backward compatible than running the entire legacy OS in a virtual machine though. In fact, with current versions of Windows, when you try to run a legacy application don’t you essentially download an entire copy of XP that runs in a VM?
Keeping things ultra backwards compatible is also the reason Windows became a security nightmare. All for the benefit of a minority of their customers.
I think you'd be surprised how much of the world is dependent on things like these. There are plenty of banks and similar critical institutions that run much older software, stuck together with tape and the prayers of greybeards, which is absolutely critical to the well-being of entire nations.
It's also not just single machines. It's entire networks and how they interact. There practically never is a point where you throw it all in the bin and start from fresh. You are always building on the choices and mistakes from the past, trying to patch just enough holes to keep things afloat. As I've stated before, the ideas our current software is based on are inherently dated, but the attacks levelled against them are not. The average hardware and software in the field cannot be anything else than behind the curve.
-
#181 Reply
Posted by
Mr. Scram
on 06 Jan, 2018 06:12
-
This is a colorful image but it doesn't sound very realistic.
>stuck together with tape and prayers of greybeards, which absolutely critical to the well being of entire nations.
Huge sums of money are transferred in and out of stocks in countries in microseconds.
Huge fortunes can be made and countries future earnings for decades lost in less time than it takes to wash one's hands of the crime.
I don't think I have much to gain from trying to convince you, though I will point out I was pretty much spot on last time you doubted me in this thread. Considering all the, let's call them theories you subscribe to, it both surprises and amuses me you've picked this one to question.
Obviously, stock trading isn't quite the same as the banking system, and the fact that an old and crumbling highway handles huge amounts of traffic isn't reason to relax and sit back. I'd say it's quite the opposite.
-
#182 Reply
Posted by
Mr. Scram
on 06 Jan, 2018 06:19
-
Oh, what the hey, I'll throw another one in the hat. I think Reuters can be considered a trustworthy source, right?
“Some of the software I wrote for banks in the 1970s is still being used,” said Hinshaw.
"He says banks have a mistaken view of technology. “Banks in the last century all held the view: ‘If ain’t broken, don’t touch it.’ So they had all these core processing systems for deposit accounts and payments that, once built, were never touched again. They were just maintained. This made sense in the past, because building core processing systems cost a lot of money, in terms of development and hardware. “Over the years, the systems were cemented in place by new developments around them, such as ATMs and callcentres. By the time the internet came around, the cement was so thick that internet banking was just added making the cement more like granite rock.”
"The risk is “not so much that an individual may have retired,” Andrew Starrs, group technology officer at consulting firm Accenture PLC, said. “He may have expired, so there is no option to get him or her to come back.”
https://www.reuters.com/article/us-usa-banks-cobol/banks-scramble-to-fix-old-systems-as-it-cowboys-ride-into-sunset-idUSKBN17C0D8
http://www.computerweekly.com/news/2240212567/Big-banks-legacy-IT-systems-could-kill-them
-
#183 Reply
Posted by
Decoman
on 06 Jan, 2018 06:19
-
Microsoft seem to have a support page for an update, but then they write this below:
https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution
Q1: I wasn’t offered the Windows security updates that were released on January 3, 2018. What should I do?
A1: To help avoid adversely affecting customer devices, the Windows security updates that were released on January 3, 2018, have not been offered to all customers. For details, see Microsoft Knowledge Base Article 4072699.
Edit: I am not 100% sure, but I think I saw something on Twitter indicating that your antivirus software might be problematic re. the new updates. Unsure.
Edit2: Ah, maybe related to the following link from Microsoft:
https://support.microsoft.com/en-us/help/4072699 ("Important: Windows security updates released January 3, 2018, and antivirus software")
-
#184 Reply
Posted by
Decoman
on 06 Jan, 2018 06:33
-
Linus Torvalds called it out: "Why is this all done without any configuration options? A *competent* CPU engineer would fix this by making sure speculation doesn't happen across protection domains. Maybe even a L1 I$ that is keyed by CPL."
https://lkml.org/lkml/2018/1/3/797
I want to see compartmentalized software and hardware. I for one do not trust Linus, and I do not like the type of idea Linus Torvalds suggests here of having lots of configuration options. I am no expert, but I would think that with optional parameters, an accidental or ill-willed toggle of an option can let an adversary easily abuse your computer. Why not just remove that possibility of abusing built-in parameters by writing sensibly sized modular code? It seems obvious that software ought to be more monolithic, such that the piece of software is compiled to your needs,
but also that one ought to be able to authenticate and recognize if a piece of software is:
1) properly coded (not a single instance of an omitted ; character in the code, for example, nor a single instance of a superfluous character in the code)
2) has the features you want and nothing more (at least as per the official guide)
3) Is secure against tampering (presumably, something that could be verified by means of some kind of authentication)
Afaik, one example of parameters known to have been abused is the fallback option of export ciphers using something called 'Dual EC DRBG', in which this patently flawed piece of crypto ended up being used by some, for "security". The 'Dual Elliptic Curve Deterministic Random Bit Generator' is also known as a standard that was pushed by NIST, apparently after having been paid some millions of dollars by NSA, where it is now speculated that NSA paid NIST to have a vulnerability/backdoor built into computers/software.
https://en.wikipedia.org/wiki/Dual_EC_DRBG
If there is something I've learned about cryptography, it is that there are certain things you
must not have in your implemented cipher code design for the sake of security, things like: a seed number acting as a hidden initialization vector for some piece of crypto math, hidden patterns, hardcoded numbers, dynamic numbers that reflect the date or mimic other known data values, "home brewed crypto ciphers", and ofc any other "up your sleeve" type of math/numbers. So far, the ideal is afaik one-way functions, in which an error in just 1 bit is enough to transmute a cipher text into a seemingly random stream of 0's and 1's, and using prime numbers is afaik one way to avoid trivial factorization of numbers, when also scaled to take into account what kind of computing power is required to scramble an encrypted message sufficiently, so as not to be decrypted in the next 10-20-50-100 years.
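The 1-bit "transmute" property mentioned above (the avalanche effect) is easy to demonstrate with a standard one-way function; SHA-256 is used here purely as an illustration, not because any post mentioned it:

```python
import hashlib

# Avalanche effect: flipping a single input bit changes roughly half
# of the output bits of a good one-way function (SHA-256 here).
msg = bytearray(b"attack at dawn")
d1 = hashlib.sha256(bytes(msg)).digest()

msg[0] ^= 0x01  # flip exactly one bit of the first byte
d2 = hashlib.sha256(bytes(msg)).digest()

# Count how many of the 256 output bits differ between the two digests.
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))
print(diff_bits, "of 256 bits differ")
```

Flipping any single input bit typically flips close to half of the 256 output bits, which is exactly the property that makes the output look like a random stream of 0's and 1's.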
I think that, at the very least, a secure method of communicating with a website is required, and even better if there are other ideas to authenticate valid webpages, code and software supposed to have been downloaded from a trusted supplier.
I personally think it would be a nice idea,
if only naively here, to get to have
software (code) turned into hardware, which you then can put checkers on with hardware only (something that just works and isn't subject to a never ending cycle of re-occurring updates), and that you can view/review with your own eyes by taking the hardware out and looking at it. I imagine some kind of thin circuit plate that can be inspected (at least for the critical parts, for sake of compartmentalization of running software on hardware, as opposed to building it all into some obscure package like a damn cpu). Maybe something that could also bridge hobby electronics with regular people I imagine.
Imagine having to now worry about hidden unseen connections in a transparent circuit board (as if one initially trusted to be able to see all the wiring paths in copper on the circuit board, and now having to worry about transparent copper or subtle doping with graphite material).
As long as severe flaws like Heartbleed keep happening (iirc someone being able to dump the server memory because of a flaw in the code used for networking protocols), there is imo no good point in pointing a finger at how users are too dumb to manage their computer. I think it should be obvious that the industry is shit, and "science" and "math" aren't there in the world as some existing and neutral party to it all to help out (and after all, the
implementation of code and things has to be good and flawless). And as Bruce Schneier has said, "you are the product" (think: corporations stealing and abusing your personal data). Ofc, I don't fully trust that guy either to be this neutral party; I personally think of him as too naive, someone who apparently thinks that nation state espionage is just ok on a general basis, being on the record with a broad sweeping statement to the effect that the NSA is doing a job he expects of them (I don't have a quotation ready at hand), and who by now should know well that the NSA and the like are involved in shady stuff and also involved in killing people with drones on the other side of the Earth. And as I think I pointed out some other time here on the forums, that guy met with Congress in a hearing and simply agreed to the very general notion that innovation is very important, without explaining what it meant (iirc the subtext was that the Congress panel in that hearing had stated a problem of not wanting to create rules and regulations that would be at odds with 'innovation', whatever that could mean; I thought of it as potentially wanting to avoid regulating mass surveillance software/hardware and the way the internet allows for mass surveillance).
-
#185 Reply
Posted by
A Hellene
on 06 Jan, 2018 07:51
-
they aren't called "INTEL" for nothing!!
hell, they aren't even designed in the west - think about that for a second!!!
i'm pretty sure that breaks rules relating to military procurement.
Ah! A hit right on the head of the nail!
Even though
Eurovision, for example, tries its best to teach the flock otherwise...
-George
-
#186 Reply
Posted by
bd139
on 06 Jan, 2018 08:03
-
I thought Eurovision was a comedy. And Intel, but that’s another story.
-
#187 Reply
Posted by
A Hellene
on 06 Jan, 2018 08:17
-
Of course, Eurovision has always been a bad joke, promoting the --according to
their nomenclature-- NWO; just see
the grave outcome today...
As Intel has also become...
The question is, why have they decided to reveal that right now?
-George
-
#188 Reply
Posted by
bd139
on 06 Jan, 2018 09:14
-
M’kay!
-
#189 Reply
Posted by
nctnico
on 06 Jan, 2018 10:21
-
Linus Torvalds called it out:
A *competent* CPU engineer would fix this by making sure speculation doesn't happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.
That is easy to say in hindsight. I read a little bit about how the actual hack works and it isn't very straightforward. The way I understand it, they make the CPU execute code via the branch prediction feature which accesses data to which the process shouldn't have access. By measuring the time this takes, it can be determined (bit by bit!) what data is at that address. The problem seems to be that the CPU checks the protection of the memory area AFTER the code has executed but BEFORE the result is marked as valid.
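The bit-recovery logic described above can be sketched as a toy simulation (entirely hypothetical code: the "cache" is just a Python set and the timing measurement is replaced by a membership test, so this models only the recovery idea, not a real exploit):

```python
# Toy model of the cache-timing side channel described above.
# A real attack measures memory-access latencies; here "cached" simply
# means membership in a set, which stands in for "access was fast".

SECRET = 0x2A  # the byte the "victim" touches speculatively

def victim_speculative_access(cache):
    # Speculatively executed code loads probe_array[SECRET], leaving that
    # cache line warm even after the access itself is rolled back.
    cache.add(SECRET)

def attacker_recover(cache):
    # Probe all 256 possible byte values; the one that is already
    # cached (i.e. "fast") reveals the secret byte.
    for value in range(256):
        if value in cache:  # stands in for "access time below threshold"
            return value
    return None

cache = set()  # all probe-array lines start uncached ("slow")
victim_speculative_access(cache)
recovered = attacker_recover(cache)
print(hex(recovered))  # → 0x2a
```

In the real attack the probe step times 256 actual memory loads (e.g. with rdtsc); the one load that completes fast identifies which line the speculative access warmed up, yielding one byte per round.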
-
#190 Reply
Posted by
timb
on 06 Jan, 2018 11:37
-
You can’t really get more backward compatible than running the entire legacy OS in a virtual machine though. In fact, with current versions of Windows, when you try to run a legacy application don’t you essentially download an entire copy of XP that runs in a VM?
Keeping things ultra backwards compatible is also the reason Windows became a security nightmare. All for the benefit of a minority of their customers.
I think you'd be surprised how much of the world is dependent on things like these. There are plenty of banks and similar critical institutions that run much older software, stuck together with tape and the prayers of greybeards, which is absolutely critical to the wellbeing of entire nations.
It's also not just single machines. It's entire networks and how they interact. There practically never is a point where you throw it all in the bin and start from fresh. You are always building on the choices and mistakes from the past, trying to patch just enough holes to keep things afloat. As I've stated before, the ideas our current software is based on are inherently dated, but the attacks levelled against them are not. The average hardware and software in the field cannot be anything else than behind the curve.
Oh, trust me, I know! I spent the first 12 years of my adult life in IT. I recognized what was coming and got out (switched to electrical engineering). Best move I ever made.
As of 5 years ago there were still bank ATMs running OS/2 Warp! I think they’re all gone by this point, however, some critical banking software is still based around OS/2, obviously it has to run in VM, but it’s still there; the kicker is that some of these OS/2 applications are in and of themselves virtualized environments, which were created in the 1990’s to run software originally created in the 1970’s! This is why I keep my money in my mattress.
The Financial Industry: Our Software is Turtles All the Way Down
Anyway, the article you quoted essentially proves the point I was trying to make. The banking industry is in the mess they’re in because they didn’t plan ahead and keep up with improvements in technology. Other industries are in a similar situation.
-
#191 Reply
Posted by
Decoman
on 06 Jan, 2018 11:52
-
Twitter is really great for getting to learn about stuff, and so I have something like 100+ twitter accounts loaded from various people that I trawl through a few times a day.
Sadly, I no longer have all the tabs loading in the background, so going through all of them and loading them one by one is ofc much slower now, as opposed to when I only had 50 twitter pages open.
Meanwhile on twitter:
I don't know that guy, nor if the paper he is referring to is legit so to speak, but it makes me wonder "oh, this sounds like it might be interesting".
-
#192 Reply
Posted by
MT
on 06 Jan, 2018 22:21
-
As of 5 years ago there were still bank ATMs running OS/2 Warp! I think they’re all gone by this point, however, some critical banking software is still based around OS/2, obviously it has to run in VM, but it’s still there; the kicker is that some of these OS/2 applications are in and of themselves virtualized environments, which were created in the 1990’s to run software originally created in the 1970’s! This is why I keep my money in my mattress.
For heaven's sake dude, don't tell people who know your physical IP location that you have your fiat money in your mattress!
The Financial Industry: Our Software is Turtles All the Way Down
Anyway, the article you quoted essentially proves the point I was trying to make. The banking industry is in the mess they’re in because they didn’t plan ahead and keep up with improvements in technology. Other industries are in a similar situation.
There is absolutely nothing wrong or failing with the financial industry, they are better off than ever before, just look into the Paradise and Panama papers!
It's us, the involuntarily screwed, the bankpenis-in-the-rectum people, who have a hard time!
The bank psychopath oligarchs' secret agenda is to end all fiat moneys and implement oligarch-psychopath-controlled bitcoins, and that's a lot worse!
-
#193 Reply
Posted by
station240
on 06 Jan, 2018 22:22
-
-
-
The banks psychopath oligarchs secret agenda is to end all Fiat moneys and implement oligarch psychopath controlled bitcoins, thats a lot worse!
No more cash, only electronic transactions, no way to hide a penny, then they'll have got us by the balls. Game over.
-
-
-
#196 Reply
Posted by
bd139
on 06 Jan, 2018 23:33
-
This is by far the best two responses from the OpenBSD guys:
https://marc.info/?l=openbsd-misc&m=151522749523849&w=2
https://marc.info/?l=openbsd-tech&m=151521473321941&w=2
This is after Theo on numerous occasions pretty much said this was coming.
Also check this performance degradation report:
https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update
Our nginx load balancers are running about 35% hotter as well. Looking at migration to more efficient bits of tech (HAProxy for example). Increasing capacity means $$$
Shit is indeed fucked.
Oh BTW don't believe all that shit about the banks in certain states. The main use of OS/2 was a TN3270 node while they ported front office stuff to other tech (mainly Java) which was a big job. It wasn't connected to the Internet and talked to branch AS/400 platforms which all talked to massive mainframes. The security abstraction was actually on the fully supported AS/400 platforms. Now it's all a combination of z-series, piles of front end caches (to support OLB), lots of windows servers (RBS/Natwest anyway) and JBoss (HSBC).
It doesn't matter if the tech is old. It is supported.
As for your fiat money, it's bits of paper and metal. It has no real value. Look at Maslow's hierarchy of needs and build a backup plan based on trading incremental steps up the ladder. If you want security, build an empire.
-
#197 Reply
Posted by
cdev
on 06 Jan, 2018 23:41
-
-
#198 Reply
Posted by
bd139
on 06 Jan, 2018 23:53
-
Globalresearch.ca is "alternative news" aka bollocks. India's demonetisation was a big move to try and kill part of their black economy which was built on forgery of the 500/1000 Rs notes mainly. 99% of notes were returned and cashed. They are back again now, with new security features.
I don't think people have realised that most of these large organisations, banks and anything seen to be evil by conspiracy theorists have a few interesting attributes:
1. Incompetence. Any significant mass of humans (usually about 2 or above) can't step forward rationally together. This isn't some grand conspiracy.
2. There is no global elite controlling anything. Because they couldn't fucking agree on what to control (see 1).
3. Shit sticks together in lumps. Sometimes big lumps.
4. People like money. I like money. Money hangs around in banks. Sometimes you get stuck to the shit lump. I keep doing it.
(Source: have worked for retail and commercial banking outfits and it's a turdfest of incompetence and nothing more. If they had an agenda it's what sandwich to have from Pret or possibly what seat covering in their Nissan Qashqai that the dog is going to piss all over the day they get it)
-
#199 Reply
Posted by
wraper
on 06 Jan, 2018 23:59
-
Why the Raspberry Pi isn't affected.
A simple explanation of how modern CPU pipelining works:
https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/
Yeah, and Arduino probably isn't affected either. Or my Casio calculator.
But many other ARM CPUs ARE affected. Like the one in my phone. Don't compare an Arduino with a device which is basically a PC, just lower performance. For example, the BeagleBone Black, which is similar to the Raspberry Pi, is affected by Spectre.
-
#200 Reply
Posted by
Mr. Scram
on 07 Jan, 2018 00:07
-
-
#201 Reply
Posted by
timb
on 07 Jan, 2018 13:02
-
As of 5 years ago there were still bank ATMs running OS/2 Warp! I think they’re all gone by this point, however, some critical banking software is still based around OS/2, obviously it has to run in VM, but it’s still there; the kicker is that some of these OS/2 applications are in and of themselves virtualized environments, which were created in the 1990’s to run software originally created in the 1970’s! This is why I keep my money in my mattress.
For heavens sake dude, dont tell people who know your physical IP location you have your Fiat money in your mattress!
That’s why I keep the mattress itself in the back of a Fiat, which I park in a different location each night.
-
#202 Reply
Posted by
bd139
on 07 Jan, 2018 13:28
-
As a Fiat owner, that's probably the most secure place to leave it. No one is going to steal that pile of shit.
-
#203 Reply
Posted by
Monkeh
on 07 Jan, 2018 13:29
-
As a Fiat owner, that's probably the most secure place to leave it. No one is going to steal that pile of shit.
And if they tried it probably wouldn't start anyway. Or get up the first hill they encounter.
-
#204 Reply
Posted by
bd139
on 07 Jan, 2018 13:41
-
Yes. It's the only car I've ever had that the OBD2 cable gets used as much as the key.
-
#205 Reply
Posted by
SparkyFX
on 07 Jan, 2018 16:07
-
Given the complexity of your typical personal computer, plus the whole operating system, plus the gear needed to connect it to a network, that's obviously too much for a single person to be an expert in all these areas, which means there must be a dependency on trust, or a problem with it, all along the way.
OTOH, no one ever said you can securely connect something to a network without someone else being able to interact with it, because the whole purpose of the connection is interaction with other nodes.
Is this now a big problem? No. It made the news and everyone goes ape, but in effect it means an additional test case for antivirus software, which you need anyway if you want to protect such a system.
-
#206 Reply
Posted by
wraper
on 07 Jan, 2018 16:35
-
Is this now a big problem? No, it made the news and everyone goes ape, but in effect it means an additional test case for anti-virus software, which you need anyway if you want to protect such a system.
It's a big problem for servers, as some are hit with up to 50% CPU performance drop after security patch like some Epic games servers. If you think that antivirus is effective against such flaw, then you are clueless.
-
#207 Reply
Posted by
andersm
on 07 Jan, 2018 16:44
-
Wasn't AMD working on something to replace the x86 architecture for consumer computers? I remember reading something like that one or two years back. It would be the perfect time to present the new CPU architecture now.
The K12 core is AFAIK supposed to be based on the same Zen architecture as Ryzen and friends, but it was postponed after AMD saw how much performance they were getting out of Ryzen. But Spectre and Meltdown are implementation issues, not architecture issues.
-
#208 Reply
Posted by
David Hess
on 07 Jan, 2018 17:49
-
Wasn't AMD working on something to replace the x86-Architecture for consumer-computers? I remember reading something like that one or two years back. Would be the perfect time to present the new CPU-Architecture now
The K12 core is AFAIK supposed to be based on the same Zen architecture as Ryzen and friends, but it was postponed after AMD saw how good performance they were getting out of Ryzen. But Spectre and Meltdown are implementation issues, not architecture issues.
AMD was going to make a desktop-performance ARM ISA processor at some point which shared the x86 infrastructure (sort of like DEC Alpha and AMD Athlon?) but I do not remember why it was cancelled.
Intel intended the Pentium 4 to be the last x86 processor series, to be replaced by Itanium, until AMD rained on their parade with their 64-bit Opteron and Athlon 64 processors.
-
#209 Reply
Posted by
andersm
on 07 Jan, 2018 18:00
-
AMD was going to make a desktop performance ARM ISA processor at some point which shared the x86 infrastructure (sort of like DEC Alpha and AMD Athon?) but I do not remember why it was cancelled.
Yes, that is the K12. It is not officially cancelled, but AMD has understandably decided to focus on Ryzen for now.
-
#210 Reply
Posted by
Towger
on 07 Jan, 2018 18:59
-
lots of windows servers (RBS/Natwest anyway)
Ah yes, the bank which managed to f*ck up both their main and backup mainframes and had no backups of the scripts they lost. The fallout from this is still ongoing, years later. It takes a long time to take customers to court after they have run around between branches/ATMs taking out money.
A fine example of outsourcing at its best...
-
#211 Reply
Posted by
bd139
on 07 Jan, 2018 19:00
-
Yes, but that was not the fault of the technology. Merely the humans, as I outlined elsewhere in another post.
-
#212 Reply
Posted by
Ampera
on 07 Jan, 2018 22:28
-
Alright guys, I have an almost pointless CPU-Z benchmark done on my i7-4790k before and after the meltdown patch:
Before:
After:
As you guys, gals, and various species of intelligent cephalopod can clearly see, straight performance has not really gone down, and this makes sense. This affects specific workloads, which I have not measured at the moment but honestly don't use. My day-to-day performance isn't ruined, but your mileage may vary, especially if you are using VMs.
-
#213 Reply
Posted by
wraper
on 07 Jan, 2018 22:35
-
Among home users measurable impact is for those who use NVMe SSD.
CrystalDisk 6 results Samsung 960 PRO 2TB NVMe
Before:
After:
-
-
Alright guys, I have an almost pointless CPU-Z benchmark done on my i7-4790k before and after the meltdown patch:
Before:
After:
As you guys, gals, and various species of intelligent cephalopod can clearly see, straight performance has not really gone down, and this makes sense. This affects specific workloads, which I have not measured at the moment, but I honestly don't use. My day to day performance isn't ruined, but your mileage may vary, especially if you are using VMs.
I think I read somewhere that the fix has to be enabled to take effect; you might want to check if that's true.
-
#215 Reply
Posted by
Mr. Scram
on 07 Jan, 2018 22:39
-
Alright guys, I have an almost pointless CPU-Z benchmark done on my i7-4790k before and after the meltdown patch:
Before:
After:
As you guys, gals, and various species of intelligent cephalopod can clearly see, straight performance has not really gone down, and this makes sense. This affects specific workloads, which I have not measured at the moment, but I honestly don't use. My day to day performance isn't ruined, but your mileage may vary, especially if you are using VMs.
What does CPU-Z actually test? You can't just translate that to your personal use.
-
-
What does CPU-Z actually test? You can't just translate that to your personal use.
Word processing, image processing, web browsing and some other stuff, if I remember correctly.
-
#217 Reply
Posted by
Ampera
on 07 Jan, 2018 23:08
-
It does something. Idk, I said this wasn't a great benchmark; it was just something I had lying around.
I've overclocked to 4.7 GHz on 2 cores and 4.6 GHz on 4 cores, and it seems to be working fine, which should counteract any issues I'm having.
As for the SSD, that almost seems to be within some strange margin of error, as the writes have gone up but the reads have gone down. I don't really see how NVMe drives would be affected, but who knows, maybe I'm sniffing snot.
-
#218 Reply
Posted by
wraper
on 07 Jan, 2018 23:11
-
It does something. Idk, I said this wasn't a great benchmark, was just something I had lying around.
I've overclocked to 4.7ghz if on 2 cores and 4.6ghz if on 4 cores, and it seems to be working fine, and that should counteract any issues I'm having.
As for the SSD, that almost seems to be within some strange margin of error, as the writes have gone up, but the reads have gone down. I don't really see how NVMe drives would be affected, but who knows, maybe I'm sniffing snot.
Don't look at the sequential read/write. Those are not typical loads and also vary highly between test iterations. Look at how 4 KiB Q32 went down by 30%.
-
#219 Reply
Posted by
Ampera
on 07 Jan, 2018 23:18
-
It does something. Idk, I said this wasn't a great benchmark, was just something I had lying around.
I've overclocked to 4.7ghz if on 2 cores and 4.6ghz if on 4 cores, and it seems to be working fine, and that should counteract any issues I'm having.
As for the SSD, that almost seems to be within some strange margin of error, as the writes have gone up, but the reads have gone down. I don't really see how NVMe drives would be affected, but who knows, maybe I'm sniffing snot.
Don't look at sequential read/write. Those are not typical loads and also highly vary during test iterations as well. Look how 4kiB Q32 went down by 30%.
That's gotta suck. I run a SATA SSD so I'm not affected, but damn.
-
#220 Reply
Posted by
Mr. Scram
on 07 Jan, 2018 23:19
-
That's gotta suck. I run a SATA SSD so I'm not affected, but damn.
I doubt SATA is going to be less affected. If so, it's only because its inherently slower performance might be hiding the actual performance hit. The underlying kernel calls aren't going to be much different.
-
#221 Reply
Posted by
Ampera
on 07 Jan, 2018 23:59
-
That's gotta suck. I run a SATA SSD so I'm not affected, but damn.
I doubt SATA is going to be less affected. If so, only because its inherent slower performance might be hiding the actual performance hit. The underlying kernel calls aren't going to be much different.
I haven't noticed anything
-
#222 Reply
Posted by
Marco
on 08 Jan, 2018 00:37
-
I wonder how they are going to solve Spectre.
If there just happen to be microcode instructions available to wipe the BTB, that would be awfully convenient.
-
#223 Reply
Posted by
SparkyFX
on 08 Jan, 2018 00:51
-
Is this now a big problem? No, it made the news and everyone goes ape, but in effect it means an additional test case for anti-virus software, which you need anyway if you want to protect such a system.
It's a big problem for servers, as some are hit with up to 50% CPU performance drop after security patch like some Epic games servers. If you think that antivirus is effective against such flaw, then you are clueless.
Antivirus is always ineffective against the vulnerability itself; it won't magically patch that. But it can always scan for code that follows a pattern or for specific exploits, and yes, there are self-encrypting ones, and yes, it's always high profile.
Nevertheless, an impact on CPU load can only be measured after a patch has been applied.
I come to think it might even be a problem to give a definite number, as this is speculative execution.
-
#224 Reply
Posted by
andersm
on 08 Jan, 2018 04:48
-
Among home users measurable impact is for those who use NVMe SSD.
I'd say there's a measurable impact for those who run storage benchmarks (with fast media), since those will be issuing a very large number of syscalls, while a CPU benchmark will be negligibly affected since it hardly issues any syscalls at all.
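A crude way to see this split on your own machine is to compare a syscall-heavy loop against a pure-compute loop (a rough sketch; os.stat is just one arbitrary syscall wrapper chosen for illustration, and absolute numbers will vary wildly between machines and patch states):

```python
import os
import time

# Why syscall-heavy workloads feel the KPTI patch more than
# compute-bound ones: time N syscalls vs N pure-Python additions.
N = 100_000

t0 = time.perf_counter()
for _ in range(N):
    os.stat(".")  # one stat syscall (plus path handling) per iteration
syscall_time = time.perf_counter() - t0

t0 = time.perf_counter()
acc = 0
for i in range(N):
    acc += i  # no kernel entry at all
compute_time = time.perf_counter() - t0

print(f"syscalls: {syscall_time:.3f}s  compute: {compute_time:.3f}s")
```

Running this before and after applying the patch (or with `pti=off` vs on, on Linux) would show the syscall loop slowing down while the compute loop stays essentially unchanged.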
-
#225 Reply
Posted by
BravoV
on 08 Jan, 2018 04:52
-
Interesting to see how this will affect video rendering like Dave does; he must be aware of it, or at least feel it, since he does it a lot - if there is, say, a 20% impact.
-
-
Yep, moar syscalls => moar slowdown, because after the patch syscalls may take up to twice as long as before, IIANM.
-
#227 Reply
Posted by
andersm
on 08 Jan, 2018 20:52
-
Video rendering is almost entirely compute-bound, so the effects should be small.
-
#228 Reply
Posted by
bd139
on 08 Jan, 2018 20:55
-
Apart from the massive IO...
-
#229 Reply
Posted by
Decoman
on 09 Jan, 2018 09:55
-
-
#230 Reply
Posted by
wraper
on 09 Jan, 2018 10:06
-
Why lots of people are using password managers in the first place, I don't know. Seems wildly insecure to me, though I am ofc no expert.
Because otherwise you'll need to recycle your passwords, which is much worse. You cannot remember tens of them, or remember which particular website each belongs to.
-
#231 Reply
Posted by
Jeroen3
on 09 Jan, 2018 10:09
-
Why lots of people are using password managers in the first place, I don't know. Seems wildly insecure to me, though I am ofc no expert. Maybe I am being weird, but why put all your important "eggs" in one big digital basket?
There is a trade-off between convenience and security. You can't have both.
A password manager is convenient and somewhat safe compared to only one password, easy passwords or post-its.
Looking at the system and compatibility with humans, password managers are an acceptable solution.
It's bad news that the Meltdown bug is this easy to exploit, with prefabricated victim software...
-
#232 Reply
Posted by
Decoman
on 09 Jan, 2018 10:10
-
Hm, it looks to me like my Win 7 computer has MAYBE been updated by Windows Update to patch the known 'Meltdown' and 'Spectre' vulnerabilities, with KB4056894. The Microsoft article about this update doesn't spell it out, and I have to freaking read other people's articles about KB4056894, which might not even be correct.
https://support.microsoft.com/en-us/help/4056894/windows-7-update-kb4056894
-
#233 Reply
Posted by
Decoman
on 09 Jan, 2018 10:12
-
Why lots of people are using password managers in the first place, I don't know. Seems wildly insecure to me, though I am ofc no expert.
Because otherwise you'll need to recycle your passwords which is much worse. You cannot remember 10's of them and remember from which particular website they are.
Are you saying that a password manager creates new passwords? That sounds unlikely, as I would think that you would have to manually change the passwords for all your websites anyway. Am I perhaps missing something here? (Edit: I guess a password manager could generate a random string of characters, but I would think that you would still have to do some manual work to start the process of changing passwords for every single website.)
Btw, I am thinking that professionally, so-called 'key management' is an important aspect for, say, the military afaik, which has to issue new keys so as not to allow the re-use of old keying material, which presumably would be bad for anything to do with operational security. For civilian use, with the poor infrastructure of the internet and computing in general, I can't imagine that it is a good idea to keep remaking your passwords if the passwords were long and complicated in the first place. I would think that anyone having placed a keylogger on your keyboard or in your computer, like say some organization, would then be able to round up all your new passwords in a much shorter period of time.
-
#234 Reply
Posted by
andtfoot
on 09 Jan, 2018 10:25
-
Why lots of people are using password managers in the first place, I don't know. Seems wildly insecure to me, though I am ofc no expert.
Because otherwise you'll need to recycle your passwords which is much worse. You cannot remember 10's of them and remember from which particular website they are.
Are you saying that a password manager re-creates new passwords? That sounds unlikely, as I would think that you would have to manually change the passwords for all your websites anyway. Am I perhaps missing something here?
Btw, I am thinking that professionally, so called 'key management' is an important aspect to say the military afaik, which has to issue new keys around so that they this way won't allow re-using old keying material, which presumably would be bad for anything to do with operational security. For civilian use, with the poor infrastructure of the internet, and computing in general, I can't imagine that it is a good idea to keep re making your passwords if the passwords were long and complicated in the first place. I would think that anyone having placed a keylogger on your keyboard or in your computer, like say some organization, would then be able to round up all your new passwords in a much shorter period of time.
You have to go around to the different websites to change the passwords, but the password itself can be randomly generated and usually auto-filled into the relevant fields. It means you can have a complicated, unique password for each site without having to remember all of them.
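The generation step is typically just a CSPRNG draw over a character set; a minimal sketch of what a password manager does internally (the function name, alphabet and length here are made up for illustration):

```python
import secrets
import string

# Generate a unique, high-entropy random password per site, so that no
# password is ever reused across websites.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    # secrets.choice uses the OS CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(len(pw))  # → 20
```

With ~94 possible characters per position, a 20-character password has on the order of 94^20 possibilities, so remembering it is hopeless anyway; the manager stores it and you only remember the master password.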
-
#235 Reply
Posted by
wraper
on 09 Jan, 2018 10:30
-
Are you saying that a password manager re-creates new passwords? That sounds unlikely, as I would think that you would have to manually change the passwords for all your websites anyway. Am I perhaps missing something here?
You have one master password for the password manager and a separate password for each website. I can say from my own experience that using the same password on multiple websites is a very bad idea (I'm registered on at least 40 websites). I had my password stolen from one hacked website; I became aware of it when my Skype account, which had the same password, started sending spam links. Then I changed that password where I remembered it being used. But about a year later I got a call from an airline company because someone had tried to spend my bonus miles. And yep, I had forgotten to change my password there. Just in case, so as not to look that stupid - I had separate passwords for things that involved money.
EDIT: I certainly know that the password was stolen from a hacked website, as I checked my email in the pwned database. And the Skype spam came from a login via the website, not my computer. Also, I've seen the same Skype spam coming from a few other people; I guess their passwords were stolen the same way.
-
#236 Reply
Posted by
Decoman
on 09 Jan, 2018 11:26
-
I like having huge passwords of random characters for anything remotely important, but on a piece of paper.
-
#237 Reply
Posted by
wraper
on 09 Jan, 2018 13:20
-
I like having huge passwords of random characters for anything remotely important, but on a piece of paper.
And you need to carry this piece of paper with you everywhere in the world. Someone else can steal a password from it, like your wife checking whether you are cheating. Also, it becomes completely impractical once there are more than a few passwords or when you need to change them.
-
#238 Reply
Posted by
nfmax
on 09 Jan, 2018 13:26
-
I use a box of filing cards...
-
-
Hiya
Just gone through this thread - does the problem affect UltraSparc 4+ processors??
Is it time to power up my Sparc V490 servers?? (noisy beasts though)
Cheers
-
#240 Reply
Posted by
Ampera
on 09 Jan, 2018 15:13
-
There may be problems with any CPUs that have speculative execution as part of the design, but you would have to test it out to see for sure, or go looking for someone who already has.
-
#241 Reply
Posted by
langwadt
on 09 Jan, 2018 15:15
-
Why lots of people are using password managers in the first place, I don't know. Seems wildly insecure to me, though I am ofc no expert. Maybe I am being weird, but why put all your important "eggs" in one big digital basket?
There is a trade-off between convenience and security. You can't have both.
A password manager is convenient and somewhat safe compared to only one password, easy passwords or post-its.
Looking at the system and compatibility with humans, password managers are an acceptable solution.
It's bad news that the Meltdown bug is this easy to exploit. With prefabricated victim software around, post-its seem like the safest option.
-
#242 Reply
Posted by
paulca
on 09 Jan, 2018 15:18
-
Apart from the massive IO...
But would that not be accomplished via a few memory-mapped DMA calls? Map the input files into memory, map the output file to memory, compute between the two. The actual IO is handled by the kernel, DMA controller and MMU via page faults.
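The memory-mapped approach described above can be sketched in Python's `mmap` module (a minimal illustration on a scratch file; after the mapping, reads touch memory and the kernel fills pages on demand instead of servicing explicit read syscalls):

```python
import mmap
import os
import tempfile

# Create a scratch input file to stand in for the video source.
fd, path = tempfile.mkstemp()
os.write(fd, b"frame" * 1000)
os.close(fd)

with open(path, "r+b") as f:
    # Map the whole file; length 0 means "to the end of the file".
    with mmap.mmap(f.fileno(), 0) as mm:
        # Touch one byte per 4 KiB page: each first touch triggers a
        # page fault that the kernel services, rather than a read() call.
        total = sum(mm[i] for i in range(0, len(mm), 4096))
        data = mm[:10]  # slicing copies bytes out of the mapping

os.remove(path)
```

Whether this actually dodges the KPTI overhead depends on how often pages fault, which is the privilege-transition question debated in the following replies.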
-
#243 Reply
Posted by
andersm
on 09 Jan, 2018 15:26
-
Apart from the massive IO...
It's about the amount of work per syscall. I assume a video encoder would read and write fairly large chunks of data while spending a lot of time processing each one, meaning the overhead would be low. A GPU-accelerated codec would be impacted more due to having to frequently call into the graphics driver.
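The amortization argument can be made concrete with a toy model (the microsecond costs below are made-up illustrative numbers, not measurements of any real patch):

```python
def overhead_fraction(syscall_cost_us, work_per_chunk_us):
    """Fraction of total time spent on syscall overhead per chunk."""
    return syscall_cost_us / (syscall_cost_us + work_per_chunk_us)

# Hypothetical: a post-patch syscall costs ~5 us.
# An encoder doing 50 ms of work per chunk barely notices...
encoder = overhead_fraction(5, 50_000)

# ...while something calling into a driver every 100 us of work
# loses several percent to the same per-call cost.
chatty = overhead_fraction(5, 100)
```

The shape of the result, not the exact numbers, is the point: the fewer and fatter the kernel crossings, the smaller the hit.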
-
#244 Reply
Posted by
bd139
on 09 Jan, 2018 15:31
-
Depends as much on the filesystem implementation as the load profile. Consider nasty shit like MFT fragmentation on NTFS.
-
#245 Reply
Posted by
bd139
on 09 Jan, 2018 15:32
-
Apart from the massive IO...
But would that not be accomplished via a few memory map DMA calls. Map the input files into memory, map the output file to memory, compute between the two. The actual IO is handled by the kernel, DMA controller and MMU via page faults.
Depends on how the privilege separation works. If you’re working through a hypervisor this may make no difference.
-
#246 Reply
Posted by
andersm
on 09 Jan, 2018 15:45
-
Depends as much on the filesystem implementation as the load profile. Consider nasty shit like MFT fragmentation on NTFS.
Fragmentation doesn't concern the application.
-
#247 Reply
Posted by
paulca
on 09 Jan, 2018 16:39
-
To be honest I'm not all that concerned. I mostly live in Linux. Linux is vulnerable of course, but it is far less likely to be running malicious code than your average home Windows box. About 99% of my Linux software is open source and compiled from source, so if it had malware in it, it would be spotted and removed.
There is still a risk, but it's much less than on your average Windows box, which is metaphorically like a Saigon hooker. So much nasty stuff in it you can see them crawling down the desktop's legs! (sorry you got that image).
I do have a Windows laptop and a gaming machine which will now be put on quarantine, so no online banking, no sensitive stuff etc.
-
#248 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 16:39
-
Meanwhile on twitter:
https://twitter.com/misc0110/status/948706387491786752
Why lots of people are using password managers in the first place, I don't know (correction: I guess what had me wondering was, why would people think a password manager is secure?). Seems wildly insecure to me, though I am ofc no expert. Maybe I am being weird, but why put all your important "eggs" in one big digital basket?
https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)
Also.. Reconstructing images from the memory apparently:
https://twitter.com/mlqxyz/status/950378419073712129
Sensible people don't put all their eggs in one basket. Using a password manager doesn't mean putting every single last password in there. However, we live in a world where you literally need an account to go to the barber and you can't realistically remember loads of different passwords. That's why you use a manager to keep track of passwords.
A sensible password strategy uses tiers for passwords of different values. It's also good to realize that experts are aware passwords aren't ideal, but that we also haven't found the perfect replacement yet. There isn't one golden strategy, just incremental insights into what are bad ideas. Using the same password in multiple places is a bad idea. Using weak passwords is a bad idea. That's where password management comes in.
-
#249 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 16:43
-
To be honest I'm not all that concerned. I mostly live in Linux. Linux is vulnerable of course, but it is far less likely to be running malicious code than your average home Windows box. About 99% of my Linux software is open source and compiled from source, so if it had malware in it, it would be spotted and removed.
There is still a risk, but it's much less than on your average Windows box, which is metaphorically like a Saigon hooker. So much nasty stuff in it you can see them crawling down the desktop's legs! (sorry you got that image).
I do have a Windows laptop and a gaming machine which will now be put on quarantine, so no online banking, no sensitive stuff etc.
Your hubris might be expensive. You browse the web, I presume? Javascript is seen as a major possible vector and will run just as happily on your box as it does on Windows. You can compile from source until the cows come home and still have your passwords taken from under your nose. Security through obscurity never works.
Fanboy stances on this OS or that also don't help. The problem almost always is the user and rarely ever the OS. If it really were the OS, Windows wouldn't be so dominant in the very security-conscious enterprise market.
-
#250 Reply
Posted by
paulca
on 09 Jan, 2018 16:55
-
To be honest I'm not all that concerned. I mostly live in Linux. Linux is vulnerable of course, but it is far less likely to be running malicious code than your average home Windows box. About 99% of my Linux software is open source and compiled from source, so if it had malware in it, it would be spotted and removed.
There is still a risk, but it's much less than on your average Windows box, which is metaphorically like a Saigon hooker. So much nasty stuff in it you can see them crawling down the desktop's legs! (sorry you got that image).
I do have a Windows laptop and a gaming machine which will now be put on quarantine, so no online banking, no sensitive stuff etc.
Your hubris might be expensive. You browse the web, I presume? Javascript is seen as a major possible vector and will run just as happily on your box as it does on Windows. You can compile from source until the cows come home and still have your passwords taken from under your nose. Security through obscurity never works.
Fanboy stances on this OS or that also don't help. The problem almost always is the user and rarely ever the OS. If it really were the OS, Windows wouldn't be so dominant in the very security-conscious enterprise market.
I'll update my browsers, but on that front I'm angry. I have warned people for years to keep the damn script kiddies at bay. Now we even have Javascript on servers (NodeJS) FFS, and Javascript with memory access. WTF? Idiots. One of the worst, most annoying languages ever written. It should have remained a junky, interpreted, sandboxed, Mickey Mouse script tool for making text flash and modifying HTML. </rant>
Compiling from source. It's not obscurity, that's not the point of compiling from source. In fact it's the polar opposite.
Windows is popular in the security-conscious enterprise market because of the centralized management (AD et al), because of "single vendor", because of support, because of... on and on and on. None of those have to do with it being secure. Secure enterprises can afford the teams of people required to keep a Windows network locked down enough to keep it secure. I'm sitting in one right now.
-
#251 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 17:15
-
I'll update my browsers, but on that front I'm angry. I have warned people for years to keep the damn script kiddies at bay. Now we even have Javascript on servers (NodeJS) FFS, and Javascript with memory access. WTF? Idiots. One of the worst, most annoying languages ever written. It should have remained a junky, interpreted, sandboxed, Mickey Mouse script tool for making text flash and modifying HTML. </rant>
Compiling from source. It's not obscurity, that's not the point of compiling from source. In fact it's the polar opposite.
Windows is popular in the security-conscious enterprise market because of the centralized management (AD et al), because of "single vendor", because of support, because of... on and on and on. None of those have to do with it being secure. Secure enterprises can afford the teams of people required to keep a Windows network locked down enough to keep it secure. I'm sitting in one right now.
My remark about obscurity wasn't related to self compiling code, though I doubt it helps much.
The truth is all OSs are unsafe and very leaky. Windows, macOS or Linux, it's all the same. The main difference is that Windows is much more popular, and is therefore targeted more. Any modern OS is such a huge pile of code that it's inevitable to be full of errors and vulnerabilities. No OS escapes this.
Besides, this isn't an OS vulnerability. This is a hardware vulnerability. OS updates can mitigate it, but won't solve it. We're in this together.
-
#252 Reply
Posted by
nfmax
on 09 Jan, 2018 17:23
-
The problem with Javascript in the browser is that nowadays, to get better performance, it is downloaded as (minified) source and then compiled to native machine code instead of bytecode (like Java) or interpreted from source. Since the process 'sandbox' is now broken, thanks to Meltdown & Spectre, it is unsafe to download, compile and run ANY language, be it Javascript, Erlang, Snobol or CORAL66. At least with Java you only have to worry about the security of the JVM, not the actual code you may download. We will have to wait and see what these forthcoming browser 'fixes' amount to.
Server-side Javascript is no more or less dangerous than any other compiled language. The 'good parts' of Javascript, in its latest versions, make for quite a pleasant programming language.
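The browser-side fix that comes up later in the thread is to coarsen the timer a script can see, since the attack needs to resolve cache-hit versus cache-miss latencies. A sketch of the idea in Python, assuming a 20 microsecond granularity (in the ballpark of what browsers shipped; the exact values varied by vendor):

```python
def coarsen(timestamp_us, granularity_us=20):
    """Round a high-resolution timestamp down to a coarse grid,
    hiding the sub-microsecond differences cache timing probes need."""
    return (timestamp_us // granularity_us) * granularity_us

# A cached access (finishing at t=1000.08 us) and an uncached one
# (t=1000.30 us) land in the same 20 us bucket and become
# indistinguishable to the script.
cached = coarsen(1000.08)
uncached = coarsen(1000.30)
```

As discussed further down, this adds noise rather than closing the channel: an attacker can repeat the measurement and average.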
-
#253 Reply
Posted by
paulca
on 09 Jan, 2018 18:33
-
The truth is all OSs are unsafe and very leaky. Windows, macOS or Linux, it's all the same. The main difference is that Windows is much more popular, and is therefore targeted more. Any modern OS is such a huge pile of code that it's inevitable to be full of errors and vulnerabilities. No OS escapes this.
You can't just bundle all operating systems together. They are all susceptible to bugs and exploits yes, but there are fundamental differences.
The one that is relevant is the ability to install software. In Windows, software is "live" out of the box. On most domestic systems it can install itself and run itself; anything with a .exe is fair game and fully trusted. The stupid notifications you get are just ignored by the 90% of people who invariably click "OK". You just can't do that on Linux: for one, you have to be root; second, you have to actually mark the file executable; thirdly, outside of a distribution Linux is not binary compatible, so one size doesn't fit all. This is why Linux viruses are incredibly rare and don't propagate anywhere near as easily. There are dozens of other examples between the two OSes to compare security, but it's safe to say that it is a lot easier to sneak malicious code onto a Windows machine than a Linux one. More recent versions of Windows are improving, but the basic architecture remains insecure as a multi-user system; it can be tamed, but it takes a LOT of effort to lock it down. But let's not delve into that rabbit hole of this OS versus that (though granted I started it).
To exploit these hardware vulnerabilities you need to execute malicious code. That was my point. This is harder to do on Linux, historically and architecturally.
-
#254 Reply
Posted by
paulca
on 09 Jan, 2018 18:41
-
Server-side Javascript is no more or less dangerous than any other compiled language. The 'good parts' of Javascript, in its latest versions, are quite a pleasant programming language.
I just hate it. I hate its history, I hate its ethos, I hate its structure.
It's got nothing to do with Java; that was just because someone invited the marketing team to the naming meeting and Java was a new buzzword. It was never meant to be a language in the first place, just a browser automation engine. If anything it's more like Lisp, a clunky, bizarre recursive language from decades ago that still survives today in places. Everything is a function that takes a function which returns a function which takes a function. <shudder>
Newer Javascript with modern frameworks like Angular is 'tolerable', but only just, and if you look under the hood of Angular or NodeJS you find it jumping through hundreds of hoops and workarounds to make the thing work. I spent a number of years writing enterprise front ends in Angular.
It's just a personal opinion, but Javascript is a botch that should have remained consigned to making text marquees and flashing banners, a browser automation framework. Pampering its proponents and allowing it to develop into a compiled application language with raw memory, file and network access was a mistake, IMHO. But maybe I'm just being bitchy.
https://www.destroyallsoftware.com/talks/wat
-
#255 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 18:48
-
You can't just bundle all operating systems together. They are all susceptible to bugs and exploits yes, but there are fundamental differences.
The one that is relevant is the ability to install software. In Windows, software is "live" out of the box. On most domestic systems it can install itself and run itself; anything with a .exe is fair game and fully trusted. The stupid notifications you get are just ignored by the 90% of people who invariably click "OK". You just can't do that on Linux: for one, you have to be root; second, you have to actually mark the file executable; thirdly, outside of a distribution Linux is not binary compatible, so one size doesn't fit all. This is why Linux viruses are incredibly rare and don't propagate anywhere near as easily. There are dozens of other examples between the two OSes to compare security, but it's safe to say that it is a lot easier to sneak malicious code onto a Windows machine than a Linux one. More recent versions of Windows are improving, but the basic architecture remains insecure as a multi-user system; it can be tamed, but it takes a LOT of effort to lock it down. But let's not delve into that rabbit hole of this OS versus that (though granted I started it).
To exploit these hardware vulnerabilities you need to execute malicious code. That was my point. This is harder to do on Linux, historically and architecturally.
I have absolutely no desire to start another war about OSs. Linux has traditionally been less targeted because it's both less popular on the desktop and much more fragmented, as you state correctly. That's not the same as it being inherently secure, but whatever is the case, that's not a discussion suitable for this thread and not one I desire to pursue. Everyone can consider an OS of choice to be superior for whatever reasons he desires. I don't care.
It's also of no relevance to the current problem. Most Linux distributions ship a browser out of the box and are therefore as susceptible to the kind of code execution that's needed for this vulnerability as any other OS. Linux, Windows, macOS, AMD, Intel - they're all at risk.
We really need to focus on solving this problem the best we can without getting sidetracked by irrelevant squabbles.
-
#256 Reply
Posted by
paulca
on 09 Jan, 2018 18:53
-
We really need to focus on solving this problem the best we can without getting sidetracked by irrelevant squabbles.
True but I really don't think the desktop is the issue we have right now. Why hack one person when you can hack a million people?
-
-
What is the exposure really? If I go home and watch youtube all night, my yt password might be exposed? Assuming MS and Google don't just automatically load up all my passwords in memory. They probably do...
-
#258 Reply
Posted by
Monkeh
on 09 Jan, 2018 18:58
-
We really need to focus on solving this problem the best we can without getting sidetracked by irrelevant squabbles.
True but I really don't think the desktop is the issue we have right now. Why hack one person when you can hack a million people?
Because the admins running the server with a million people to hack are paying attention, and the million people at home are burying their heads in the sand.
-
#259 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 19:02
-
True but I really don't think the desktop is the issue we have right now. Why hack one person when you can hack a million people?
The biggest threat by far is indeed the server space, where many servers often share the same hardware. This allows code execution on or from a server completely separated from yours, outside of your control. Obviously, most servers are maintained well and will therefore be patched properly. It's likely that malicious people will then target individual users, like ransomware has been doing lately. Code running from a website that's able to recover your administrator or sudo password or encryption passwords or keys from the computer's memory is an absolute nightmare, though websites aren't the only vector imaginable. There are many ways of running scripts in userspace, mainly because we have always counted on the separation doing its job. There is very little mitigation, because we never counted on it being a possibility. Even worse, some forms of mitigation in other areas make a system more vulnerable to this problem.
-
#260 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 19:22
-
What is the exposure really? If I go home and watch youtube all night, my yt password might be exposed? Assuming MS and Google don't just automatically load up all my passwords in memory. They probably do...
Anything that's in memory can be read if someone manages to run code in userspace, the latter typically not being considered high risk. When you visit a website, this happens all the time. This vulnerability means malicious people can intercept any password, like your administrator password, encryption passwords and keys, SSL keys, you name it. In fact, they can intercept anything they want, but it makes more sense to go for short snippets of valuable data that give access to the other data. They can help themselves to everything that keeps the internet, your data and your money safe. You use encryption all the time without even realizing it, like when you browse this website. Our modern world is literally built on the protection this grants us and that protection is potentially gone. Obviously, with your passwords being exposed, you can then completely own a system and all the data on it.
At the same time, we shouldn't overstate the reach of this vulnerability. It doesn't mean attackers can take over your system without executing code on your side and it also doesn't mean they can just alter data or take over a system directly. That can be a consequence of what they learn, but the primary issue is that data that is supposed to be inaccessible and secret can be recovered and funnelled away.
When you know this, you might also understand why people are so worried about enterprise and cloud environments, where many customers often share the same underlying hardware thanks to virtualization. It means that a fully patched and completely up to date server can be attacked by code run on another virtual server sharing the same hardware, recovering passwords, databases, encryption keys and more. This is the primary worry everyone has, as it's how almost the whole internet is constructed. Only after that there are worries about individual systems. When criminals can't attack well maintained servers, they might very well attack much less well maintained computers at home.
However, with the flaw being in the hardware and not in the software it seems we might be able to mitigate the risk, but experts aren't sure it can actually be made safe without changing the hardware. Obviously, changing all the hardware in the world isn't done overnight, and isn't very economically and logistically feasible. We're still learning about the vulnerabilities, so it may turn out to be workable in the end or we may discover that we really do need to replace everything eventually to be completely safe.
If this were just about your Youtube password being exposed, we wouldn't have heard about it.
-
#261 Reply
Posted by
paulca
on 09 Jan, 2018 21:21
-
It's also not that easy to exploit to full potential, if I understand it correctly. A lot of it is poking around randomly in the dark.
To get a fully targeted exploit of something like a browser, the attacker needs to be very specific about memory addresses and vectors, so needs an understanding of the running program's memory in addition to a good understanding of the kernel address-space routines that will yield the details they need, such as the TLB maps and process descriptors.
Not saying because it's difficult it won't be done, but it's not like every script kiddy malware writer out there will do it either.
Note that the original 'authors' of the exploit determined they could read out kernel memory at something like 500 KB/s, so dumping the whole kernel would take quite a while. Then they have to analyse that and find various offsets to the running programs, then know specifics about those programs to re-run the exploit and try to access the correct physical memory locations via the cached out-of-order executions.
As I understand it anyway.
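Taking the quoted figure as roughly 500 KiB/s, a back-of-envelope calculation shows why whole-memory dumps are impractical and targeted reads are the real worry (illustrative arithmetic only; published proof-of-concept rates varied):

```python
def dump_time_hours(memory_gib, rate_kib_per_s=500):
    """Hours needed to read memory_gib of RAM at rate_kib_per_s."""
    kib = memory_gib * 1024 * 1024  # GiB -> KiB
    return kib / rate_kib_per_s / 3600

# Dumping 16 GiB at 500 KiB/s takes over nine hours, which is why an
# attacker would rather locate and read a few KB of key material.
hours = dump_time_hours(16)
```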
There are a lot of moving parts to a successful exploit, and making it generic enough to run in Javascript in a browser and target any PC or any random application will not be easy, or potentially even possible. I expect attacks will need to be much more targeted.
Password managers have been mentioned and will probably be a primary target, as will browser password auto-complete stores etc.
One good thing is it's read only. So they can't hack bits of memory to hijack things directly.
-
#262 Reply
Posted by
nfmax
on 09 Jan, 2018 22:55
-
One good thing is it's read only. So they can't hack bits of memory to hijack things directly.
Of course, if they can find and read the root/administrator password somewhere in kernel memory, then they can easily leverage that into total ownership of the system...
-
#263 Reply
Posted by
nctnico
on 09 Jan, 2018 23:15
-
And of course only Windows 7 & 8 get slower
-
#264 Reply
Posted by
bd139
on 09 Jan, 2018 23:20
-
On Unixes, passwords never exist in kernel memory. The kernel is only aware of UID and GID which it keeps in the kernel data structure. It has no idea what a password even is.
Only passwd and login handle passwords and they are user mode programs. passwd is setuid and login is only executable by root.
At best if there are stale pipes in memory then those could be revealed but that’s it. Even ssh is a user mode process and the kernel will only handle encrypted streams.
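The setuid arrangement described above can be verified programmatically. A hedged sketch using a scratch file to show how the setuid bit is set and detected (on a real box you would stat `/usr/bin/passwd` itself, which should carry this bit and be owned by root):

```python
import os
import stat
import tempfile

# Scratch file standing in for a binary like passwd.
fd, path = tempfile.mkstemp()
os.close(fd)

# Mark it setuid + rwxr-xr-x, as passwd is: when executed, it runs
# with the file owner's privileges (root, for passwd) rather than
# the caller's, which is how a user can update /etc/shadow.
os.chmod(path, 0o4755)
is_setuid = bool(os.stat(path).st_mode & stat.S_ISUID)

os.remove(path)
```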
If they unmap the kernel and only the process and any libraries it talks to are loaded into the address space then this is total isolation. The process can’t see the kernel nor read any other processes. All it does is put some shit in some registers, pray to the kernel and it falls entirely out of existence when god (Linux) answers the prayer. If you get the prayer wrong or poke around the wrong bits of the universe, behold for you are killed, unless you’re slightly attached to the universe still at which point you are a zombie. I don’t like zombies. Zombies eat your brains.
Windows: fuck knows. Between COM, bits of msgina, lsass, bits of kernel OM, some sticky tape, string and some dead rodents, your guess is as good as mine. This is the company that managed to put LSASS in a little Hyper-V sponsored pit of despair, declare security victory and only the next day end up with a CVE. MSFT can’t outrun some crap kicked out by some hippies from the 1970s on way too much green that hasn’t changed a whole lot.
Psss idiots.
Disclaimer: slightly too much wine this evening.
-
#265 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 23:23
-
And of course only Windows 7 & 8 get slower
Is that speculation on your part or based on numbers?
-
#266 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 23:25
-
On Unixes, passwords never exist in kernel memory. The kernel is only aware of UID and GID which it keeps in the kernel data structure. It has no idea what a password even is.
Only passwd and login handle passwords and they are user mode programs. passwd is setuid and login is only executable by root.
At best if there are stale pipes in memory then those could be revealed but that’s it. Even ssh is a user mode process and the kernel will only handle encrypted streams.
If they unmap the kernel and only the process and any libraries it talks to are loaded into the address space then this is total isolation. The process can’t see the kernel nor read any other processes. All it does is put some shit in some registers, pray to the kernel and it falls entirely out of existence when god (Linux) answers the prayer. If you get the prayer wrong or poke around the wrong bits of the universe, behold for you are killed, unless you’re slightly attached to the universe still at which point you are a zombie. I don’t like zombies. Zombies eat your brains.
Windows: fuck knows. Between COM, bits of msgina, lsass, bits of kernel OM, some sticky tape, string and some dead rodents, your guess is as good as mine. This is the company that managed to put LSASS in a little Hyper-V sponsored pit of despair, declare security victory and only the next day end up with a CVE. MSFT can’t outrun some crap kicked out by some hippies from the 1970s on way too much green that hasn’t changed a whole lot.
Psss idiots.
Disclaimer: slightly too much wine this evening.
All OSs are vulnerable. The traditional flame wars can be omitted, this one hits everyone.
-
#267 Reply
Posted by
bd139
on 09 Jan, 2018 23:27
-
The mitigation strategies are different and the surface area is smaller on Unixes. Way smaller.
Also there is secondary mitigation with MAC (SELinux) which kills off a huge portion of entry vectors. Bar timing attacks via browsers, which are now pretty much mitigated by reducing timer resolution, the main attack vector is system access because you need to run arbitrary code on the target.
This isn’t an OS war, it’s a mitigation architecture war now.
Plus it looks like we can get some performance back in a few months, looking at Linux 4.14. PCID is coming in. Incidentally, OSX already uses this, as does Hyper-V, but not Windows Server (wtf).
This has been my life since it dropped for ref.
-
#268 Reply
Posted by
Mr. Scram
on 09 Jan, 2018 23:56
-
The mitigation strategies are different and the surface area is smaller on Unixes. Way smaller.
Also there is secondary mitigation with MAC (SELinux) which kills off a huge portion of entry vectors. Bar timing attacks via browsers, which are now pretty much mitigated by reducing timer resolution, the main attack vector is system access because you need to run arbitrary code on the target.
This isn’t an OS war, it’s a mitigation architecture war now.
Plus it looks like we can get some performance back in a few months, looking at Linux 4.14. PCID is coming in. Incidentally, OSX already uses this, as does Hyper-V, but not Windows Server (wtf).
This has been my life since it dropped for ref.
Please note that this paragraph is a generic rant, not aimed at you. I'm so sick of the petty pissing contests that break out whenever an OS is mentioned. "My OS better because..." Nobody cares. Every OS has merits the others don't. Every OS has some serious problems the others don't. They all sorta kinda work. Nobody cares that you compile from source [Linux], or have the biggest software library [Windows], or have a market share too small to make malware viable [macOS]. Shoo, go call your mother. She'll be happy to hear from you, as opposed to the rest of the world.
I don't believe reducing the resolution solves the problem. So far it only seems to make the attack noisier, but we all know that's just a matter of integrating a bigger dataset. But sure, maybe all the little bits add up and make an attack impractical.
Besides, it doesn't matter what the size of the hole is if the ship is sunk. We're all boned, and working as hard as we can to get unboned. That's all we can do right now.
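The "integrating a bigger dataset" point is just the statistics of averaging: the standard error of a mean of N noisy samples shrinks like 1/sqrt(N), so coarser timers raise the work factor quadratically rather than closing the channel. A sketch (the nanosecond figures are illustrative assumptions, not measurements):

```python
import math

def standard_error(noise_sd, n_samples):
    """Standard error of the mean of n_samples, each with noise_sd."""
    return noise_sd / math.sqrt(n_samples)

# If timer coarsening makes each probe ~4000x noisier, recovering the
# original precision costs roughly 4000**2 = 16 million probes:
# a work factor for the attacker, not an impossibility.
one_probe = standard_error(20.0, 1)
averaged = standard_error(20.0, 16_000_000)
```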
-
#269 Reply
Posted by
bd139
on 10 Jan, 2018 00:15
-
Indeed. Couldn’t agree more.
I’ve actually got about 500 machines on all platforms to save from this mess. There are no winners really. Everything is fucked, slow or on fire. Also some vendors who have patched their appliances have patched too quickly and botched it. Total nightmare.
Reducing resolution is adding a work factor yes. Is it enough, we don’t know yet. A good point.
So back in about 1995 I should have taken the other coloured pill. Anyone want to hire an EE? Will solder for pennies.
-
#270 Reply
Posted by
Mr. Scram
on 10 Jan, 2018 00:30
-
Indeed. Couldn’t agree more.
I’ve actually got about 500 machines on all platforms to save from this mess. There are no winners really. Everything is fucked, slow or on fire. Also some vendors who have patched their appliances have patched too quickly and botched it. Total nightmare.
Reducing resolution is adding a work factor yes. Is it enough, we don’t know yet. A good point.
So back in about 1995 I should have taken the other coloured pill. Anyone want to hire an EE? Will solder for pennies.
"There are no winners really. Everything is fucked, slow or on fire. Also some vendors who have patched their appliances have patched too quickly and botched it. Total nightmare."
Yeah. Companies are just throwing out updates, which are obviously not thoroughly tested, will probably not fix the whole problem and likely cause other issues. We've seen a number of those already. Everyone is running around confused and making it up as they go along.
As it happens, I have some CCTV footage from actual IT departments. It's fairly grim stuff:
-
#271 Reply
Posted by
BravoV
on 10 Jan, 2018 03:13
-
-
#272 Reply
Posted by
nctnico
on 10 Jan, 2018 07:02
-
And of course only Windows 7 & 8 get slower
Is that speculation on your part or based on numbers?
Dutch news article.
-
#273 Reply
Posted by
nfmax
on 10 Jan, 2018 08:35
-
On Unixes, passwords never exist in kernel memory.
They are never left hanging around for their sentimental value, naturally, but they do exist transiently in the buffers of the HID input driver or console serial driver.
Disclaimer: slightly too much wine this evening.
Entirely understandable, given the circumstances
-
#274 Reply
Posted by
DimitriP
on 10 Jan, 2018 09:04
-
And of course only Windows 7 & 8 get slower
All those years most people thought "Intel inside" meant something different
It was just like the warning on every windows machine "Starting Windows"...
-
-
Probably the best answer to this is to do all browsing in a virtual machine.
That said, the real issue here is that this has existed since 1995 and no security expert noticed it until now. During that time it may well have been exploited by bad guys. There is no way of telling if it has.
When you think about it, if you forget to lock your house or car, the security issue doesn't arise when you realise you did so. It arises when you walk away without locking it. Likewise, assuming this vuln hasn't been exploited in over 20 years of the computer being 'left unlocked' is naive. This is the fallacy behind the idea that patching and updating makes a computer secure. It is only marginally better than a placebo.
Especially as there are thousands of similar vulns in all operating systems, still unpatched.
Linux and MacOS are in principle no better than Windows in this respect, since they are all based on C, whose buffer overflow risk is the No. 1 cause of vulns. What is really needed is a completely new OS that ditches all of this bad code.
The decision to use C instead of Pascal for the IBM PC must rank as the single worst decision in the entire history of computing. That said, the really inexcusable thing is that C was allowed to become entrenched in the IT industry even after its security flaws became apparent. So deeply entrenched that it's now going to take the IT equivalent of D-Day to oust it.
-
#276 Reply
Posted by
bd139
on 10 Jan, 2018 09:39
-
I agree with C being a big problem but it's not the root of all evil here. We also don't use C for anything at all these days. It's python, C#, Java and Go. Go is looking like a good replacement for C in a lot of areas.
However none of the above languages prevent this issue. Also a VM won't help you at all here. And people did notice problems in this space as far back as 2005/7.
There are three travesties here:
1. We've allowed the glorified 8080 to progress as long as it has, tacking more bits to it, shovelling more turds onto the pile and hiding a RISC processor and all sorts of "go faster" hacks underneath it.
2. We've blindly used bits of computer science from the 1960s without formal verification on modern architectures. Turns out, with all the hacks above, the output is not as deterministic as people were hoping.
3. There is actually no need for processors as fast as we have to get work done. We've been burning the "it's cheaper to make it fast later" paradigm candle.
-
#277 Reply
Posted by
paulca
on 10 Jan, 2018 10:38
-
3. There is actually no need for processors as fast as we have to get work done. We've been burning the "it's cheaper to make it fast later" paradigm candle.
But you favor virtual machine based languages over C?
I'm a bit confused by the aggression against C. Buffer overrun is not a C issue, it's a CPU issue. If you ask the CPU to read memory and that memory is mapped, it will. Having the language/VM bounds-check all your array accesses would have a significant impact on performance and trash various coding styles with dynamic unbound arrays.
Bad code is bad code, moving it between languages is just like using public toilet paper, it just moves the problem around.
Pascal? Seriously? Look up what Pascal was intended for: teaching students. Yet oddly it's one of the weirder languages, only slightly better than BASIC in a lot of regards. Thankfully I haven't touched Pascal since the Amiga in the 90s.
-
#278 Reply
Posted by
bd139
on 10 Jan, 2018 10:47
-
I favour virtual machine based languages over C because it's down to the implementation of the language to do the optimisations rather than the end user and it doesn't require recompilation then to add and remove instrumentation or optimisations.
Buffer overrun is a C issue because C has a virtual machine model as well (stack frames + heap) and offers no compile or execution time guarantees at all.
Pascal was rather nice, particularly in a p-code virtual machine environment. I have seen programs written on one architecture running on another before without modification and this was in the late 1980s. Then there was Turbo Pascal which paired with DESQview turned your little 386 into a stupidly reliable multitasking workstation.
Ada is another good one. Did a bit of that on PPC in the late 1990s. Absolutely bomb proof, apart from when it was actually used in a bomb and blew up.
-
-
I'll update my browsers, but on that front I'm angry. I have warned people for years to keep the damn script kiddies at bay. Now we even have Javascript on servers (NodeJS) FFS and Javascript with memory access. WTF? Idiots. One of the worst, most annoying languages ever written. It should have remained a junk interpreted, sandboxed, Mickey Mouse script tool for making text flash and modifying HTML.
Sorry, I like C as much as JS, both have their things, but which language hasn't?
((a,b)=>{[22,24,25].forEach((i)=>b+=a[i]+' '),alert(b+=a[30].split('').reverse().join(''))})("I have warned people for years to keep the damn script kiddies at bay. Now we even have Javascript on servers (NodeJS) FFS and Javascript with memory access. WTF?".replace(/[(.?.)]/g,'').split(' '),'')
-
#280 Reply
Posted by
bd139
on 10 Jan, 2018 11:11
-
Sorry, I like C as much as JS, both have their things, but which language hasn't?
Common LISP
-
-
Dear applications programmers, systems programming is none of your business, so stop disparaging C you silly script kiddies.
--
Linus T.
-
#282 Reply
Posted by
paulca
on 10 Jan, 2018 11:22
-
Buffer overrun is a C issue because C has a virtual machine model as well (stack frames + heap) and offers no compile or execution time guarantees at all.
Done much assembler? C does this for a reason. If you write any complex "functions" in asm you will find yourself using self-rolled stack frames. Though I agree that in some ways you are forced to do things the way the compiler wants and have little to no control over that.
Heap is a C++ thing; in C you are on your own with malloc et al. C++ has the whole botched local, reference, pointer, heap (new/delete, construct/destruct) bollox that causes endless confusion and bugs: 16 different ways to swing the cat, so nobody understands each other's code.
All this said when you move to higher level languages you don't lose the baggage, you just add more on top and lose performance and control.
-
#283 Reply
Posted by
Marco
on 10 Jan, 2018 12:22
-
systems programming is none of your business, so stop disparaging C you silly script kiddies.
Once you learn to completely avoid buffer overflows and use after free, so never.
-
#284 Reply
Posted by
bd139
on 10 Jan, 2018 13:02
-
Buffer overrun is a C issue because C has a virtual machine model as well (stack frames + heap) and offers no compile or execution time guarantees at all.
Done much assembler? C does this for a reason. If you write any complex "functions" in asm you will find yourself using self-rolled stack frames. Though I agree that in some ways you are forced to do things the way the compiler wants and have little to no control over that.
Yes. I've written a couple of non-trivial compilers as well. The problem is that the hardware provides no isolation guarantees between stack frames. There are some admittedly clever things like stack canaries, but these aren't comprehensive.
Heap is a C++ thing; in C you are on your own with malloc et al. C++ has the whole botched local, reference, pointer, heap (new/delete, construct/destruct) bollox that causes endless confusion and bugs: 16 different ways to swing the cat, so nobody understands each other's code.
All this said when you move to higher level languages you don't lose the baggage, you just add more on top and lose performance and control.
malloc/free is just an abstraction over a heap. I have written allocators. Agree with C++. The problem with C++ is complexity.
Not quite true with higher level languages. Your job is to solve something in the problem domain, not incur problems outside of the problem domain. 99.9% of all problems don't need a system programming language and 80% of the problems in the system programming domain don't need low level languages. Literally the only two things that need direct memory access not via an abstraction are talking to hardware and any IO buffering etc.
Programmers have lost the right to manage their own memory at this point.
All of the above is solved easily (common lisp etc) but there's a lot of investment in a shitty way to do stuff that people don't want to burn or admit was wrong.
-
#285 Reply
Posted by
paulca
on 10 Jan, 2018 13:46
-
Programmers have lost the right to manage their own memory at this point.
That's a slippery slope though. Just look at Enterprise Java. I get angry every day about over-engineering: object models of data structures and ORM frameworks, abstract, parallel builder patterns and the whole works. When you cancel it all out against each other the net is 0. 0 fucking point. They argue that it abstracts complexity into frameworks, but those frameworks don't get included in their complexity calculations... Again, public toilet roll: it just moves the turd around. Solving complexity with complexity is the Emperor's new clothes of Object Orientation, which IMHO has gone way too far, to ridiculous extremes.
The relevance I want to draw out, is that while you might be right that most programmers should stay in higher levels, they should be absolutely FORCED to spend some time doing low level C, Asm etc. So they at least know what the computer is doing underneath. Giving juniors high level languages from the get go and not providing them with any low level experience leads to extremely inefficient code.
I have seen people importing a multi-megabyte jar in Java, rewriting a bunch of classes to support the framework in that jar, all just to use a single string formatting function for dates! Utter madness... and they still got the fecking timezones wrong, and it made it to production because nobody ever tested it in another timezone!
More related to electronics, I watched a video the other night that measured the "digitalWrite()" function in the Arduino libs taking approximately 179 instruction cycles! Using it is fine, as long as you understand the cost.
-
#286 Reply
Posted by
bd139
on 10 Jan, 2018 13:57
-
No one said enterprise Java. That's its own special turd
Agree with forcing people to do low level stuff as well. I think you should start at the bottom and work your way up. Step 1: here's a resistor ... Step 50: here's an AbstractBeanFactory.
Don't talk to me about time zones. We can't get anyone who knows their shit on that front. It's impossible. It's quite a difficult subject. I dealt with an event tracking system a while ago which used date spans. Turns out some clever fucker stored the start date in UTC and end date in local time. Bring on DST, all the events dropped an hour in length. CHAOS this caused. That's the sort of shit I get paid to fix.
Arduino is horrid. I use neat AVR-GCC for that banana and use the arduino as a dev board.
-
#287 Reply
Posted by
dmills
on 10 Jan, 2018 14:33
-
I would add to the low level direct memory pile anything needing DMA, anything with short deadline real time constraints, anything running without an OS where the peripherals are registers mapped into the memory map, and anything that needs to be deterministic (To the point of running from non cachable memory sometimes!).
C & C++ undoubtedly get used way outside the appropriate application domains, with C++ adding the fun of a leaky, fragile and complex set of abstractions, but actually there is a reason people still write in those languages that goes way past inertia. And some of us just like a language that (like Latin) stays still for years at a time.
If you are doing systems on small cores (Maybe a dozen kB of flash and a few kB of RAM), as I see it your choices are C and assembler, with maybe a very stripped down C++ as a third contender, what else is there that will actually let me deal with the memory mapped peripherals in a sane way?
Go and Rust will be taken seriously when there is a defined language that does not change every other month, and when there are a few different compilers implementing that defined language (Also a defined ABI for the common platforms would be nice).
Incidentally, GCs are right out! Reference count if you must, but garbage collection does bad things to realtime code (Yes I know bounded GCs exist, comment still usually stands with real implementations).
Personally, I favour the old embedded guys approach, figure out how many of what size at compile time and statically allocate the lot! You can still run off the end, but there will be no 'use after free' if you never free anything!
For date and time stuff, "Calendrical Calculations" is still my go-to reference, but mixing UTC and, well, **anything else** is just always going to be a source of pain.
Regards, Dan.
-
#288 Reply
Posted by
bd139
on 10 Jan, 2018 14:46
-
Can't disagree with anything there
-
#289 Reply
Posted by
paulca
on 10 Jan, 2018 14:48
-
Incidentally, GCs are right out! Reference count if you must, but garbage collection does bad things to realtime code (Yes I know bounded GCs exist, comment still usually stands with real implementations).
Personally, I favour the old embedded guys approach, figure out how many of what size at compile time and statically allocate the lot! You can still run off the end, but there will be no 'use after free' if you never free anything!
In the stock exchange (order entry gateways where customers would complain if we breached 100 µs wire to wire) we would pre-allocate everything. We are talking about 60 GB of pre-allocation. C++ abstractions were banned in most places. Even function calls were frowned upon in some places. Loops were a matter of "find someone else's and add your code there".
There was a legacy Java version, and while we measured our latency in microseconds (I achieved a sub-microsecond "New Order" message I was proud of), the Java guys measured in milliseconds. We were averaging 50-80 micros; they were averaging 10-20 millis. But we had few outliers, and those were caused by TCP reorders and stuff like Nagle's algorithm not being configured. Java had them all over the place, measuring up to 1000 milliseconds while it resized a hash map or did some large-scale garbage collection.
-
#290 Reply
Posted by
bd139
on 10 Jan, 2018 14:57
-
Why the hell you'd do that with Java I don't know. That's definitely real time territory.
Our guarantees are merely 100ms RTT 95th percentile. Then again most of our messages are fucking XML, so that has its own can of worms
-
#291 Reply
Posted by
paulca
on 10 Jan, 2018 15:06
-
Why the hell you'd do that with Java I don't know. That's definitely real time territory.
Our guarantees are merely 100ms RTT 95th percentile. Then again most of our messages are fucking XML, so that has its own can of worms
We had a guy show up to an interview from the Java space. He told us a story about his previous project and said it was an XML message gateway. We asked how many messages per second. He looked at us uncertainly and said, "about 40 a minute", with a questioning inflection.
We smiled affectionately and said, "We do 20,000 per second per session and we support 20 sessions per gateway."
His mouth dropped.
-
#292 Reply
Posted by
bd139
on 10 Jan, 2018 15:15
-
We interview them occasionally as well. They're the sort of people who you ask what a profiler is and they think it's an attachment for their clippers.
Our front office stuff handles about 20,000 requests a second. And that's all shitty code written by the lowest bidder with layers and layers of accumulated crap built over 20 years. Fun fun fun. Thank god I only do messaging. We had one guy almost crying because he found latency on one page was due to 22 SQL joins with an aggregate of about 50 gig of data on one endpoint
-
#293 Reply
Posted by
paulca
on 10 Jan, 2018 15:20
-
SQL
Standard Quota of Latency
-
#294 Reply
Posted by
cdev
on 10 Jan, 2018 15:23
-
How do you optimize performance on servers, especially when it comes to non-uniform memory access (NUMA)?
-
#295 Reply
Posted by
bd139
on 10 Jan, 2018 15:28
-
SQL
Standard Quota of Latency
Yep that’s about it.
Or Shit Queries Lock which is fixed by Dickheads Bearing Attitudes.
Today’s exploration is Erlang and Riak which is fun. That goes like the clappers. Learning me some erlang for great good.
How do you optimize performance on servers, especially when it comes to non-uniform memory access (NUMA)?
One avoids the fuck out of NUMA architectures if you can.
-
#296 Reply
Posted by
dmills
on 10 Jan, 2018 15:40
-
And failing that, profile the hell out of it **ON THE EXACT HARDWARE AND OS AND OTHER CODE YOU WILL BE USING!**
Regards, Dan.
-
#297 Reply
Posted by
Old Don
on 10 Jan, 2018 16:40
-
Warning, Will Robinson: your self-driving car is equipped with Takata air bags and Intel microprocessors. What could go wrong!
-
#298 Reply
Posted by
Bicurico
on 10 Jan, 2018 16:54
-
I have not read this whole thread, so excuse me if this thought has come up already, I think it hasn't.
I have a major question:
How much is this really a "bug" and Intel's fault?
I have read the technical explanation on the Raspberry Pi website (very well done) and to me it seems that the problem is not a "fault" but a "consequence" of trying to implement predicted branching, out of order processing, etc.
One could decide for security and switch it off, at the cost of less performance.
But it seems to me like with a car: you can turn off all driving assistance features and the car drives faster, doesn't reduce power on drifts, etc., but at the expense of less security.
How is Intel to blame for this?
I am not arguing in favour of Intel (or other chip manufacturers), I am just asking.
It would be a bug if, after predictive branching, you could issue 3 consecutive NOP commands that would dump memory pages to user space. That would be a bug.
But programming a routine that deliberately fools the predictive branching, making it go wrong, to then allocate the previously used memory blocks of discarded operations and read them out, does not necessarily seem like a bug to me?
Also, how probable is it to actually get to a memory page that contains useful data? And how would you then analyse the binary data? Like getting the page that contains half the crypto string: how would you know which part it is? This is something I cannot visualize in my mind. It would be nice if someone could explain how these new "bugs" could be exploited in PRACTICE.
Mind you, I am not a low level programmer, nor am I too familiar with CPU architecture, hence why I have this question.
Also, how could you avoid this kind of attack and still have all these speed optimizations?
Thanks,
Vitor
-
#299 Reply
Posted by
paulca
on 10 Jan, 2018 17:04
-
How much is this really a "bug" and Intel's fault?
Simply because address space isolation has been the bastion of multi-program, multi-user systems for the past 30 years. It is meant to be absolutely impossible for one program to read another's private memory. The processor's architecture is meant to prevent this at a hardware level. The bug means there is a way past this.
-
#300 Reply
Posted by
Bicurico
on 10 Jan, 2018 17:22
-
What I meant is: can the "bug" be prevented on a CPU that is using out-of-order processing, branch prediction, speculation, cache and side channels?
Is it possible to have address space isolation with branch prediction and out of order processing?
As far as I have read, the only CPU's that don't suffer from this "bug" are the ones that do not feature branch prediction and out of order processing.
Again, I am genuinely asking because I don't know the answer.
In my mind (oh God, I am listening to The Smiths - Heaven Knows I'm Miserable Now and was writing along as I started the sentence...), which has a simplistic representation of how the CPU works, the only secure way I could imagine would be a cache cleaning routine that would wipe any memory block that had to be dismissed before allowing any other process to use it. I don't know how that would hit performance, or if this would even make sense...
Regards,
Vitor
-
#301 Reply
Posted by
dmills
on 10 Jan, 2018 17:46
-
Such a cleaning routine would expose you the other way.... Now I am timing to see if my accessible cache line has been evicted from the cache by the speculative load!
Having the speculative load check the memory safety of BOTH branches before executing the cache load (and falling back on a pipeline stall followed by an in-order load if the MMU reports that both addresses are not safe) looks like it would work, but would move the problem to the TLB (you would need to fall back if the TLB does not contain both possible target pages, rather than loading a TLB entry!).
You would still have a small window between changing the MMU permissions and the speculative load that would need careful consideration as there might be a window of a few instructions there that could be exploited.
This is a non-issue if you take care to ensure that you control the jobs running on your machines, which of course sucks for the cloud providers (my heart bleeds, bleeds I tell you!), and probably argues that we should NOT have ended up with Turing-complete web browsers!
I am surprised that folks are surprised by this, cache side channel attacks are a common and popular game for breaking badly written crypto and out of order execution and its interaction with the cache is about as hard to reason about as crypto implementations.
Regards, Dan.
-
#302 Reply
Posted by
SeanB
on 10 Jan, 2018 17:59
-
Basically this was known about years ago, as an erratum in a datasheet about the superscalar architecture having variable latency in branching, due to speculative code execution, branch prediction and other fun things to do with caching. The CPU is so fast that waiting for the (to the CPU) slow L2 cache to respond to a memory access request would waste 100 or more clock cycles that could otherwise be used, and the trip from L2 to main memory similarly costs 1000 or more cycles (to the processor, all of eternity plus some more waiting for the first byte to come through, then again an eternity for the rest). Hence the need to use that otherwise idle cycle time: first with a prediction algorithm to do the OOO execution, predictive branching and speculative execution in the waiting time, then with extra cache space and controllers to handle all the data that came with it before it was discarded. Then someone saw that a separate set of those cache blocks plus some logic gave you a virtual processor that could use the time L1 was stalled waiting for L2 or main memory, so you could create hyperthreads in the same silicon with minimal overhead in most cases.
All this means that execution times per instruction depend on what else is going on, and this was considered an annoyance because it stopped simple loops from being a good timing standard (as before, on older x86, code with a predictable number of cycles per instruction gave a known time per loop). Then just recently somebody took a look at that and thought: if the timing depends on what happens around the thread, then information could be leaking out of there. Thus Spectre and Meltdown. And previously Rowhammer, which revisited that old memory bug: enough noise induced into a cell can cause local reference rails to rise enough to flip a bit in adjacent cells of memory.
-
#303 Reply
Posted by
David Hess
on 11 Jan, 2018 03:18
-
Simply because address space isolation has been the bastion of multi-program, multi-user systems for the past 30 years. It is meant to be absolutely impossible for one program to read another's private memory. The processor's architecture is meant to prevent this at a hardware level. The bug means there is a way past this.
IBM should have known better but I saw a list of vulnerable processors which included theirs. I think their Z series was on it.
What I meant is: can the "bug" be prevented on a CPU that is using out-of-order processing, branch prediction, speculation, cache and side channels?
Is it possible to have address space isolation with branch prediction and out of order processing?
As far as I have read, the only CPU's that don't suffer from this "bug" are the ones that do not feature branch prediction and out of order processing.
The thing in common with the processors which are vulnerable to Meltdown is that permission checks occur at instruction retirement which makes sense because that is where instruction faults must be resolved. By definition, an instruction fault during speculation is irrelevant unless that side of the branch is taken which is why the exploit can take advantage of speculative instruction faults without causing an actual instruction fault which would be acted on.
All that is necessary to prevent Meltdown is an earlier permission check which either blocks speculative loads entirely or blocks the speculatively loaded data from being operated on during speculative instructions. AMD apparently does this by invalidating (but not flushing) TLB entries on CR3 register changes which yields the benefits of the software workaround without the performance penalty and testing permissions of the speculative load before instruction retirement.
Having the speculative load check the memory safety of BOTH branches before executing the cache load (and falling back on a pipeline stall followed by an in-order load if the MMU reports that both addresses are not safe) looks like it would work, but would move the problem to the TLB (you would need to fall back if the TLB does not contain both possible target pages, rather than loading a TLB entry!).
There are no current processors which speculatively execute both sides of a branch which Wikipedia calls "eager execution". If they did, then branch prediction would not be necessary because every branch would be automatically predicted correctly 100% of the time in retrospect.
-
#304 Reply
Posted by
BrianHG
on 11 Jan, 2018 03:36
-
LOL, I think the Motorola 68040 back in the day already had branch speculation caching as well. Though, I don't believe it had the processing power with high resolution timers to take advantage of the flaw in the same way today's cpus can.
-
#305 Reply
Posted by
BravoV
on 11 Jan, 2018 03:46
-
Say there is a bad guy about to commit a crime, who used to work as a programmer at a company and knows the internal workings of the application that company uses.
It runs at an external hosting service serving all their back office activities, say general ledger, customer information or hell, even billing.
Now, once the guy quits the company, assume he can order the same hosting service running on the exact same host the company is using, then snoop on, say, all the customer info and sell it to the company's competitor.
Is this bug traceable if that happened? I mean in the system log?
-
#306 Reply
Posted by
David Hess
on 11 Jan, 2018 04:12
-
Is this bug traceable if that happened? I mean in the system log?
The data leak occurs through speculated instructions which are never retired as part of the visible instruction stream. This is why monitoring for access violations will not reveal anything. As far as the CPU is concerned, they never happened.
If someone profiled the code they might wonder what it was doing; it might not seem to be getting anything done while using a lot of processor cycles.
-
#307 Reply
Posted by
BravoV
on 11 Jan, 2018 04:16
-
Is this bug traceable if that happened? I mean in the system log?
The data leak occurs through speculated instructions which are never retired as part of the visible instruction stream. This is why monitoring for access violations will not reveal anything. As far as the CPU is concerned, they never happened.
If someone profiled the code they might wonder what it was doing; it might not seem to be getting anything done while using a lot of processor cycles.
As I suspected: if the above scenario happened, it would be an untraceable crime, wouldn't it?
I guess even the authorities would have a real problem proving the crime in court.
-
#308 Reply
Posted by
David Hess
on 11 Jan, 2018 04:18
-
As I suspected: if the above scenario happened, it would be an untraceable crime, wouldn't it?
I guess even the authorities would have a real problem proving the crime in court.
People are usually caught through means other than technical so I doubt it will make a difference. The perpetrator would be a former employee who stole confidential information and that would be enough.
-
#309 Reply
Posted by
Mr. Scram
on 11 Jan, 2018 08:00
-
Probably the best answer to this is to do all browsing in a virtual machine.
That said, the real issue here is that this has existed since 1995 and no security expert noticed it until now. During that time it may well have been exploited by bad guys. There is no way of telling if it has.
When you think about it, if you forget to lock your house or car, the security issue doesn't arise when you realise you did so. It arises when you walk away without locking it. Likewise, assuming this vuln hasn't been exploited in over 20 years of the computer being 'left unlocked' is naive. This is the fallacy behind the idea that patching and updating makes a computer secure. It is only marginally better than a placebo.
Especially as there are thousands of similar vulns in all operating systems, still unpatched.
Linux and MacOS are in principle no better than Windows in this respect, since they are all based on C, whose buffer overflow risk is the No. 1 cause of vulns. What is really needed is a completely new OS that ditches all of this bad code.
The decision to use C instead of Pascal for the IBM PC must rank as the single worst decision in the entire history of computing. That said, the really inexcusable thing is that C was allowed to become entrenched in the IT industry even after its security flaws became apparent. So deeply entrenched that it's now going to take the IT equivalent of D-Day to oust it.
Virtual machines or sandboxing aren't effective in this case. That's pretty much the cause of all the consternation. Normally, you could assume that code run in a sandbox or VM could only touch its own userspace. Now it turns out that it could very well read data outside of its own area, breaking the barriers we rely upon for security. Data can leak between user and kernel, sandbox and kernel or VM and another VM.
Even though it's a complex attack, there's no doubt that malware makers are working on weaponizing it as we speak. It's likely that it will then be sold off to anyone willing to pay in a convenient package, so the smaller fish don't have to develop the complicated software themselves. Malware has unfortunately become a proper business and people have deep pockets to invest in new ways to make our lives a bit more difficult.
-
#310 Reply
Posted by
dmills
on 11 Jan, 2018 09:32
-
but would move the problem to the TLB (you would need to fall back if the TLB does not contain both possible target pages, rather than loading a TLB entry!).
There are no current processors which speculatively execute both sides of a branch which Wikipedia calls "eager execution". If they did, then branch prediction would not be necessary because every branch would be automatically predicted correctly 100% of the time in retrospect.
That is not quite what I was getting at. The question is not one of executing both sides of a branch, but "Could both conditions of this branch execute without causing a cache or TLB change before retirement?" Execution has nothing to do with it; the question is whether either condition of this branch will change the cache or TLB state in a way that differs between the two branches.
Of course, thinking about it, even this does not really do it, because I can still use the timing difference between the speculation-OK case (both things are in cache and TLB) and the no-speculation case (one of them is not) to extract information. It is just another level of indirection!
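That last point, that the speculate/don't-speculate decision is itself observable, can be modeled in a few lines. This is a hedged sketch with invented "cycle counts", not real hardware behavior:

```python
# Toy model: even if the CPU refuses to speculate unless both branch
# targets are already resident, the refusal itself is a timing oracle.
# "Cycle counts" here are invented constants, not hardware numbers.

def run_branch(resident, a, b):
    """Pretend cycle count: fast if speculation was allowed (both
    targets resident), slow if the pipeline had to stall instead."""
    return 1 if a in resident and b in resident else 10

secret_page = "kernel_page"
resident = {"attacker_page", secret_page}  # secret bit: is kernel_page cached?

# The attacker guarantees their own page is resident, then times the
# branch; elapsed time now depends only on the secret page's residency.
t = run_branch(resident, "attacker_page", secret_page)
leaked_bit = (t == 1)   # fast run -> the secret page was in the cache/TLB
print(leaked_bit)       # True
```

One secret bit per timed branch is slower than a full Flush+Reload readout, but it is still a leak, which is exactly the "another level of indirection" problem.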
Regards, Dan.
-
#311 Reply
Posted by
Decoman
on 15 Jan, 2018 15:59
-
And, somewhat related (to Intel and computer security):
"INTEL AMT SECURITY ISSUE LETS ATTACKERS BYPASS LOGIN CREDENTIALS IN CORPORATE LAPTOPS"
https://press.f-secure.com/2018/01/12/intel-amt-security-issue-lets-attackers-bypass-login-credentials-in-corporate-laptops/
"Intel AMT is a solution for remote access monitoring and maintenance of corporate-grade personal computers, created to allow IT departments or managed service providers to better control their device fleets. The technology, which is commonly found in corporate laptops, has been called out for security weaknesses in the past, but the pure simplicity of exploiting this particular issue sets it apart from previous instances. The weakness can be exploited in mere seconds without a single line of code."
"To exploit this, all an attacker needs to do is reboot or power up the target machine and press CTRL-P during bootup."
"Although the initial attack requires physical access, Sintonen explained that the speed with which it can be carried out makes it easily exploitable in a so-called “evil maid” scenario. “You leave your laptop in your hotel room while you go out for a drink. The attacker breaks into your room and configures your laptop in less than a minute, and now he or she can access your desktop when you use your laptop in the hotel WLAN."
"The issue affects most, if not all laptops that support Intel Management Engine / Intel AMT. It is unrelated to the recently disclosed Spectre and Meltdown vulnerabilities."
-
#312 Reply
Posted by
bd139
on 15 Jan, 2018 16:22
-
This should be turned off by default. Fortunately, the first thing we do is kill AMT on our laptops.
-
#313 Reply
Posted by
stj
on 15 Jan, 2018 22:48
-
-
#314 Reply
Posted by
cdev
on 16 Jan, 2018 00:02
-
The system is broken.
-
#315 Reply
Posted by
stj
on 16 Jan, 2018 00:40
-
The system is broken.
or the system is by design.
-
#316 Reply
Posted by
timb
on 16 Jan, 2018 00:40
-
The system is broken.
The BIOS’s closed.
The can’o’worms open.
Hackers ain’t got nothing to lose, they rollin’.
So good night cruel world, I’ll see you in the morning.
-
#317 Reply
Posted by
paulca
on 16 Jan, 2018 08:11
-
This may sound like paranoia, but trust me, it's not. Governments have been building covert snooping backdoors into telco hardware for years. Why not domestic hardware?
-
#318 Reply
Posted by
bd139
on 16 Jan, 2018 08:51
-
-
#319 Reply
Posted by
Jeroen3
on 16 Jan, 2018 09:06
-
There are cloud managed routers by Cisco. They don't even need to intercept the package.
-
#320 Reply
Posted by
Marco
on 16 Jan, 2018 09:13
-
If you don't have your own industry to manufacture your own telecom equipment, you don't have security.
China has it right ... hell, as a European company scared of industrial espionage I'd trust Huawei over Cisco; they are more desperate for approval. The US is just so blatant in its total disregard for its "allies", especially the non Five Eyes ones.
-
#321 Reply
Posted by
Jeroen3
on 16 Jan, 2018 09:45
-
There are a few European network gear companies though, Nokia, Mikrotik, AVM... Not enough.
-
#322 Reply
Posted by
bd139
on 16 Jan, 2018 09:51
-
I certainly wouldn't trust Huawei. NSA are targeting their hardware for implants.
Best approach for security is dumb switching, intelligent nodes, TLS with PFS or SSH between all nodes, and assuming that your entire network is insecure. Hardware-encrypted disks with keys stored in the TPM slow physical attacks down; cold-boot attacks against a TPM are difficult if not impossible. Also some thermite-filled plant pots and numerous tamper switches in the rack.
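The "TLS with PFS" part can be sketched with Python's standard ssl module. Cipher availability varies by OpenSSL build, so treat this as illustrative rather than a hardening recipe:

```python
# Sketch: restrict a TLS server context to ephemeral (forward-secret)
# key exchange, so a stolen long-term key can't decrypt recorded
# traffic. Cipher-string details vary by OpenSSL build; illustrative only.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")   # TLS <= 1.2: ephemeral only

for c in ctx.get_ciphers():
    name = c["name"]
    # TLS 1.3 suites (TLS_*) always use ephemeral key exchange;
    # everything else must now be explicitly ECDHE/DHE.
    assert name.startswith("TLS_") or "ECDHE" in name or "DHE" in name
print("all candidate ciphers provide forward secrecy")
```

Note that `set_ciphers` only governs TLS 1.2 and below; TLS 1.3 suites are forward-secret by design, which is why the check treats them separately.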
With any black-box closed-source appliance you're SOL already; you can't win this battle with closed source. It's hard enough dealing with shit like ME / AMT.
Gonna get me a Z80-based X25 / TNC and use packet radio. That'll not have any implants in it.
-
#323 Reply
Posted by
paulca
on 16 Jan, 2018 09:57
-
Indeed. Cisco implants for example:
https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-upgrade-factory-show-cisco-router-getting-implant/
That seems a targeted intercept-and-bug approach.
What I am talking about is hardware fitted "out of the factory" with a covert channel. Not mentioning the company, but while programming a front-end management tool for a domestic broadband optical OLT "head end", we were told there was one more interface, but we couldn't see it, and not even the OS developers were allowed to see it or make its presence known in diagnostics. It is installed in hardware, and the binary component that allows control over the link is provided to them. The OS is not even allowed to show its existence, never mind whether it's in use, but all units shipped must have it and it must be connected to a special link while in service.
All very cloak and dagger, and the designers of the system didn't know much about what it actually did, except that it was expected to be able to tap any of the optical interfaces and thus receive all data sent to and received from all 64 or so connected premises on that port.
This was circa 2015. A US company.
-
#324 Reply
Posted by
bd139
on 16 Jan, 2018 10:12
-
Oh nice. I'd be shitposting that all over the Internet if I got my hands on it.
Every nefarious fucker out there doing this sort of shit needs to watch their companies burn.
-
#325 Reply
Posted by
Marco
on 16 Jan, 2018 13:40
-
These implants still need information from the manufacturers; they need to be indistinguishable, after all, and they don't want to spend months after each new model without intercept capability.
Really the interception is mostly to remove liability from Cisco.
-
#326 Reply
Posted by
MT
on 16 Jan, 2018 19:32
-
Libtards: the Russians infiltrated the election! Trumptards: no, it was the NSA!
The NSA don't even have to do implants, just speculative branching!
My Huawei smartphone is doing smart things, not for me but for the Chinese government and the NSA!
My bank says I have zero money in my account. I show them my statement; there was a million there yesterday. I accuse them of fraud, they deny it, I sue, and then what? Banks and the NSA in conspiracy with US and Russian oligarchs.
-
#327 Reply
Posted by
cdev
on 16 Jan, 2018 20:11
-
Performance isn't that important in the grand scheme of things; the 10-30% performance hit is basically a few months' worth of progress.
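A quick back-of-envelope check of the "few months" figure, assuming a nominal ~30% per year single-thread improvement rate (an assumption for illustration, not a measured number):

```python
# Back-of-envelope: how long does compound progress take to win back a
# fractional performance hit? Solve (1 + r)^(m/12) = 1 / (1 - hit).
# The 30%/year rate is an assumed figure, not a measurement.
import math

def months_to_recoup(hit, annual_rate=0.30):
    """Months of compound improvement needed to offset a fractional hit."""
    return 12 * math.log(1 / (1 - hit)) / math.log(1 + annual_rate)

print(round(months_to_recoup(0.10), 1))  # ~4.8 months
print(round(months_to_recoup(0.30), 1))  # ~16.3 months
```

So the claim roughly holds at the 10% end, while a 30% hit is closer to a year and a half of progress under that assumed rate.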
OTOH, people's privacy and security is really important.
Lots of really sleazy people out there.
-
#328 Reply
Posted by
TerraHertz
on 24 Jan, 2018 07:01
-
https://www.rt.com/news/416712-intel-bug-fix-problems/
‘WTF is going on?!’ Linux creator attacks Intel as it retracts ‘garbage’ fix for critical bug
"As it is, the patches are COMPLETE AND UTTER GARBAGE," Torvalds said in a message posted to the Linux kernel mailing list on Sunday.
"All of this is pure garbage. Is Intel really planning on making this sh*t architectural?" he asked. "Has anybody talked to them and told them they are f*cking insane? Please, any Intel engineers here - talk to your managers."
Torvalds said that the best possible solutions for the company would be to recall two decades worth of products and to give everyone free CPUs. But instead, Intel is trying to avoid huge losses and further damage to its reputation, and intends to continue shipping flawed hardware with software protection which will be turned off by default, he explained.
Actually it's about 15 years, not two decades. But who's counting?
I'm loving this drama. I've long felt that CPUs were getting far too complex, and that all the out-of-order and speculative execution, combined with multi-level caches and exploding combinatorial complexity, would eventually bite back. Not to mention enjoying seeing karma finally come to a company that spends so much of its time embedding entire hidden system architectures (the IME, running Minix, with full access to TCP/IP) that were always obviously intended as intelligence/government backdoors invisible to users. And they are now _known_ to be for that, with the other justifications just shallow excuses. Burn, Intel, burn.
-
#329 Reply
Posted by
nfmax
on 24 Jan, 2018 09:51
-
Breaking news!
Forty years of steadily increasing processor power come to a sudden halt - Moore later
-
#330 Reply
Posted by
Jeroen3
on 28 Sep, 2018 08:45
-
You found a memory leak in firefox? Exciting... or not...
What did you expect a "free unused memory" addon was going to do? Free memory means you've paid too much.