In light of the recent announcement of potential vulnerabilities in Ryzen processors, two stories have emerged. The first is that AMD processors could have secondary vulnerabilities in the secure processor and the ASMedia chipsets. The second concerns the company that released the report, CTS-Labs: its approach to this disclosure, the background of this previously unknown security-focused outfit, its intentions, and its corporate structure. Depending on the angle you take in the technology industry, whether as a security expert, a company, the press, or a consumer, at least one of these stories should interest you.

To make it clear, the two stories boil down to these questions:

One What are the vulnerabilities, how bad are they, and what can be done?
Two Who are CTS-Labs, why does their approach to responsible disclosure differ from that of other security firms, why are a number of elements about the disclosure atypical of a security firm, what is their financial model, and who are their clients?

In our analysis of the initial announcement, we took time to look at what information we had on the flaws, as well as identifying a number of key features about CTS-Labs that did not fit our standard view of a responsible disclosure, along with a few points on Twitter that did not seem to add up. Since then, we have approached a number of experts in the field and a number of companies involved, and attempted to drill down into the parts of the story that are not completely obvious. I must thank the readers who reached out to me over email and through Twitter; they have helped immensely in getting to the bottom of what we are dealing with.

On the back of this, CTS-Labs has been performing a number of press interviews, leading to articles such as this one at our sister site, Tom’s Hardware. CTS reached out to us as well; however, a number of factors led to delaying the call. Eventually we found a time to suit everyone. It was confirmed in advance that everyone was happy for the call to be recorded for transcription purposes.

Joining me on the call was David Kanter, a long-time friend of AnandTech, semiconductor industry consultant, and owner of Real World Technologies. From CTS-Labs, we were speaking with Ido Li On, CEO, and Yaron Luk-Zilberman, CFO.


The text here was transcribed from the recorded call. Some superfluous or irrelevant commentary has been omitted, and the wording has been tidied slightly for readability.

This text is being provided as-is, with minor commentary at the end. There is a substantial amount of interesting detail to pick through. We try to tackle both sides of the story in our questioning.

 

IC: Who are CTS-Labs, and how did the company start? What are the backgrounds of the employees?

YLZ: We are three co-founders, graduates of Unit 8200 in Israel, a technological intelligence unit. We have a background in security, and two of the co-founders have spent most of their careers in cyber-security, working as consultants for the industry, performing security audits for financial institutions, defense organizations, and so on. My background is in the financial industry, but I have a technological background as well.

We came together in the beginning of 2017 to start this company, whose focus was to be hardware cyber-security. As you guys probably know, this is a frontier/niche area now that most of the low-hanging fruit in software has been picked. So this is where the game is moving, we think at least. The goal of the company is to provide security audits, and to deliver reports to our clients on the security of those products.

This is our first major publication. Mostly we do not go public with our results; we just deliver our results to our customers. I should say, very importantly, that we never deliver the vulnerabilities themselves that we find, or the flaws, to a customer to whom the product does not belong. In other words, if you come to us with a request for an audit of your own product, we will give you the code and the proof-of-concepts, but if you want us to audit someone else’s product, even if you are a consumer of the product, a competitor, or a financial institution, we will not give you the actual code – we will only describe to you the flaw that we find.

This is our business model. This time around, in this project, we started with ASMedia, and as you probably know the story moved to AMD as they imported the ASMedia technology into their chipset. Having studied one, we started studying the other. This became a very large and important project, so we decided we were going to go public with the report. That is what has brought us here.

IC: You said that you do not provide flaws to companies that are not the manufacturer of what you are testing. Does that mean that your initial ASMedia research was done with ASMedia as a customer?

ILO: No. So we can audit a product that the manufacturer of the product orders from us, or that somebody else, such as a consumer or a third interested party, orders from us, and then we will provide the description of the vulnerabilities, much like our whitepaper, but without the technical details to actually implement the exploit.

Actually ASMedia was a test project, as we’re engaged in many projects, and we were looking into their equipment and that’s how it started.

 

IC: Have you, either professionally or as a hobby, published exploits before?

ILO: No we have not. That being said, we have been working in this industry for a very long time. We have done security audits for companies, found vulnerabilities, and given that information to the companies as part of consultancy agreements, but we have never actually gone public with any of those vulnerabilities.

 

IC: What response have you had from AMD?

ILO: We got the email today to say they were looking into it.

DK: If you are not providing Proof of Concept (PoC) to a customer, or technical details of an exploit, with a way to reproduce it, how are you validating your findings?

YLZ: After we do our validation internally, we take a third party validator to look into our findings. In this case it was Trail of Bits, if you are familiar with them. We gave them full code, full proof of concept with instructions to execute, and they have verified every single claim that we have provided to them. They have gone public with this as well.

In addition to that, in this case we also sent our code to AMD, and then Microsoft, HP, and Dell, the integrators, and also domestic and some other security partners. So they have all the findings. We decided not to make them public. The reason is that we believe it will take many, many months for the company, even under ideal circumstances, to come out with a patch. So if we wanted to inform consumers about the risks that they have with the product, we just couldn’t afford, in our minds, to not make this public.

 

DK: Even when the security team has a good relationship with a company who has a product with a potential vulnerability, simply verifying a security hole can take a couple of days at least. For example, with the code provided with Spectre, a security focused outsider could look at the code and make educated guesses within a few minutes as to the validity of the claim.

ILO: What we’ve done is this. We have found thirteen vulnerabilities, and we wrote a technical write-up on each one of those vulnerabilities, with code snippets showing exactly how they work. We have also produced working PoC exploits for each one of the vulnerabilities, so you can actually exploit each one of them. And we have also produced very detailed tutorials on how to run the exploits on test hardware, step-by-step, to get all the results that we have been able to produce here in the lab. We documented it so well that when we gave it to Trail of Bits, they took it, ran the procedures by themselves without talking to us, and reproduced every one of the results.

We took this package of documents, procedures, and exploits, and we sent it to AMD and the other security companies. It is a process that took Trail of Bits about 4-5 days to complete, so I am very certain that they will be able to reproduce this. Also we gave them a list of exactly what hardware to buy, and instructions with all the latest BIOS updates and everything.

YLZ: We faced a problem – how do we make a third party validator not just sit there and say ‘this thing works’ but actually do it themselves without us contacting them? We had to write a detailed manual, a step-by-step kind of thing. So we gave it to them, and Trail of Bits came back to us in five days. I think that the guys we sent it to are definitely able to do it within that time frame.

IC: Can you confirm that money changed hands with Trail of Bits?

(This was publicly confirmed by Dan Guido earlier, stating that they were expecting to look at one test out of curiosity, but 13 came through, so they invoiced CTS for the work. Reuters reports that a $16,000 payment was made as ToB’s verification fee for third-party vulnerability checking.)

YLZ: I would rather not make any comments about money transactions and things of that nature. You are free to ask Trail of Bits.

 

IC: The standard procedure for vulnerability disclosure is to have a CVE filing and MITRE numbers. We have seen public disclosures, even 0-day and 1-day public disclosures, have relevant CVE IDs. Can you describe why you haven’t in this case?

ILO: We have submitted everything we have to US Cert and we are still waiting to hear back from them.

IC: Can you elaborate as to why you did not wait for those numbers to come through before going live?

ILO: It’s our first time around. We haven’t – I guess we should have – this really is our first rodeo.

 

IC: Have you been in contact with ARM or Trustonic about some of these details?

ILO: We have not, and to be honest with you I don’t really think it is their problem. So AMD uses Trustonic t-Base as the base for their firmware on Ryzen processors, but they have built quite a bit of code on top of it, and in that code are security vulnerabilities that don’t have much to do with Trustonic t-Base. So we really don’t have anything to say about t-Base.

IC: As some of these attacks go through TrustZone, an Arm Cortex-A5, and the ASMedia chipsets, can you speak to whether other products with these features can also be affected?

ILO: I think that the vulnerabilities found are very much … Actually let us split this up between the processor and the chipset as these are very different. 

For the secure processor, AMD built quite a thick layer on top of Trustonic t-Base. They added many features, and they also added a lot of features that break the isolation between processes running on top of t-Base. So there are a bunch of vulnerabilities there that are not from Trustonic. In that respect we have no reason to believe that we would find these issues on any other product that is not AMD’s.

Regarding the chipset, there you actually have vulnerabilities that affect a range of products. Because as we explained earlier, we just looked first at AMD by looking at ASMedia chips. Specifically we were looking into several lines of chips, one of them is the USB host controller from ASMedia. We’re talking about ASM1042, ASM1142, and the recently released ASM1143. These are USB host controllers that you put on the motherboard and they connect on one side with PCIe and on the other side they give you some USB ports.

What we found are these backdoors that we have been describing that come built into the chips – there are two sets of backdoors, hardware backdoors and software backdoors, and we implemented clients for those backdoors. The client works on AMD Ryzen machines, but it also works on any machine that has these ASMedia chipsets, and so quite a few motherboards and other PCs are affected by these vulnerabilities as well. If you search online for motherboard drivers, such as on the ASUS website, and download ASMedia drivers for your motherboard, then those motherboards are likely vulnerable to the same issues as you would find on the AMD chipset. We have verified this on at least six vendor motherboards, mostly from the Taiwanese manufacturers. So yeah, those products are affected.

 

IC: On the website, CTS-Labs states that the 0-day/1-day way of public disclosure is better than the 90-day responsible disclosure period commonly practiced in the security industry. Do you have any evidence to say that the paradigm you are pursuing with this disclosure is any better?

YLZ: I think there are pros and cons to both methods. I don’t think that it is a simple question. I think that the advantage of the 30 to 90 days of course is that it provides an opportunity for the vendor to consider the problem, comment on the problem, and provide potential mitigations against it. This is not lost on us.

On the other hand, I think that it also gives the vendor a lot of control over how it wants to address these vulnerabilities: it can first deal with the problem, then come out with its own PR about the problem – I’m speaking generally and not about AMD in particular here – and in general it will attempt to minimize the significance. If the problem is indicative of a widespread issue, as is the case with the AMD processors, then the company would probably want to minimize it and play it down.

The second problem is that if mitigations are not available in the relevant timespan, this paradigm does not make much sense. You know, we were talking to experts about the potential threat of these issues, and some of them are in the logic segment, in ASICs, so there is no obvious direct patch that can be developed as a workaround. This may or may not be available. Then the other one requires issuing a patch in the firmware and then going through the QA process, and typically when it comes to processors, QA is a multi-month process.

I estimate it will be many many months before AMD is able to patch these things. If we had said to them, let’s say, ‘you guys have 30 days/90 days to do this’ I don’t think it would matter very much and it would still be irresponsible on our part to come out after the period and release the vulnerabilities into the open.

So basically the choice we were facing in this case was either we do not tell the public, let the company fix it, and only then disclose to the public – and in this circumstance we would have to wait, in our estimate, as much as a year, meanwhile everyone is using the flawed product – or alternatively we disclose: we give it to the company, and at the same time we are giving it to the company we announce the flaws to the public, so that customers are aware of the risks of those products and can decide whether to buy and use them, and so on.

In this case we decided that the second option is the more responsible one, but I would not* say that in every case this is the better method. But that is my opinion. Maybe Ilia (CTO) has a slightly different take on that. But these are my concerns.

*Editor's Note: In our original posting, we missed out the 'not' which negates the tone of this sentence. Analysis and commentary have been updated as a result.  

IC: Would it be fair to say that you felt that AMD would not be able to mitigate these issues within a reasonable time frame, therefore you went ahead and made them public?

YLZ: I think that is a very fair statement. I would add that we saw that it was a big enough issue that the consumer had the right to know about it.

IC: Say, for example, CTS-Labs had been in charge of finding Meltdown and Spectre – would you have followed the same path of logic?

YLZ: I think that it would have depended on the circumstances of how we found it, how exploitable it was, how reproducible it was. I am not sure it would be the case. Every situation I think is specific.

 

DK: How are you absolutely sure that these issues cannot already be rectified in hardware? There are plenty of external chip designers who will not be able to tell you with any degree of certainty what can or cannot be rectified through microcode or through patches, or through undocumented register flips, etc. Who were the chip design experts that you spoke to, and how confident are you that their assessment was correct?

ILO: Let us start by saying that everything we have said are our own estimates, based on talks that we have had with people in the semiconductor industry, with chip designers and so forth. But these are estimates. We cannot say for certain how long it will take them to patch it or to find a workaround, or if a workaround is possible. I know exactly what you are talking about, that those chips could have undocumented features, and if you flip a bit in a register to enable some hidden feature or something that the engineers left behind in the design, that can help you to turn certain parts of the chip on and off. Maybe you can do that, and those registers are, more often than not, not very well documented even within the company itself. I know about this stuff, and everything that I am saying is our own estimate, it may be incorrect, we don’t really know or understand [trails off and doesn’t finish sentence]. Yeah.

In any case, the first thing I can say about this is that ASMedia produces ASICs. So I am fairly certain, based on everything we have read and the research we have done, that this is not an FPGA chip so they can’t just patch it with FPGA updates. I do not know if they have hidden features that would enable them to disable those features and I guess it is up to them to tell us.

YLZ: I think that it is up to them to announce what kind of workaround is available and how costly it will be in terms of disabling features and in terms of performance or whatnot. I think that when you have a hardware-level flaw, it is a serious issue.

Regarding how long it would take, we have spoken to experts involved in the QA process in the semiconductor industry and we received a virtually unanimous response that the QA process is the longest part of the patching process. The patching itself may be simple or difficult but the QA process takes a long time.

In fact, the one vulnerability that came out for AMD, a lower level vulnerability, came out about three months ago, and I believe they still have not come out with a patch. And now we are talking about 13 of them.

ILO: I want to correct that, they did come out with a patch. It took them over three months. It was announced at the end of September, and the new version of AGESA containing the patch came out mid-January, and you have to go through the process of rolling out the new version of AGESA to different motherboard manufacturers, and they have to add it to their own QA process for the updates, and that process might still be rolling two months after the patch came out. It’s a very long process.

But we can’t say with precision how long it will take AMD to come out with patches but we feel confident that it will take months and not weeks.

 

IC: How many security researchers did you disclose to before going public?

YLZ: You mean the technical details in full? Trail of Bits was the only external party, and then afterwards, together with the company, we disclosed to Microsoft, HP, Dell, Symantec, FireEye, and CrowdStrike. They have the whole shebang.

IC: Gadi Evron, the CEO of Cymmetria, has started talking on social media with knowledge of the vulnerabilities. Can you confirm you briefed them, or did they [Gadi Evron] get the details some other way?

ILO: We are in touch with them, but they have not gone through the materials yet. They might decide to do that – we are going to see.

YLZ: They are collaborating with us, so they have seen quite a bit of the findings, but unlike Trail of Bits they have not got the full information, the step-by-step.

IC: Would there be any circumstance in which you would be willing to share the details of these vulnerabilities and exploits under NDA with us?

YLZ: We would love to, but there is one quirk. According to Israeli export laws, we cannot share the vulnerabilities with people outside of Israel, unless they are a company that provides mitigations for such vulnerabilities. That is why we chose the list. But look, we are interested in the validation of this – we want people to come out and give their opinion, but we are limited to that circle of the vendors and the security companies, so that is the limitation there.

IC: Would that also prevent you from publishing them publicly?

YLZ: That is an interesting question, I haven’t even thought about that.

ILO: I think that we spoke to our lawyers, and generally, as far as I know, because I am not a lawyer, I don’t think it stops us. That being said, we have no intention of publishing these vulnerabilities publicly to anyone outside the large security companies that can handle these.

IC: Sure, but on the website you have a table at the bottom that says that if anyone finds mitigations to these vulnerabilities to get in contact, but you have not supplied any details. How do you marry the fact that you are requesting mitigations with not providing any detail for anyone to replicate the issues?

ILO: We are in touch with two large security vendors right now who have the materials and are looking into the materials and producing mitigations. As soon as they do produce them we will definitely update the website.

YLZ: I would add that we can’t assume that we are the only people who have been looking into those processors and found problems there. So what we are saying is that in addition to ourselves, if anyone has mitigations against them, we are happy to share them with the company and to receive it from individuals.

IC: Even though not producing the details actively limits who can research the vulnerabilities?

YLZ: Yes.

There is nothing that I would love more than to have the validation from the world, you guys, and everybody else – if I didn’t think that I would be (a) jeopardizing users, because it is a long patching process, and (b) violating a couple of laws. But yes, that’s the only thing. Now we are sitting here with our fingers crossed that the companies that we gave this to, including AMD, come out with their response and accept or reject it. We are confident that all of this works, but we would love to hear from them.

 

Claimed Vulnerabilities

|           | Attacks                          | PoC Claimed                          |
| MasterKey | Secure Processor                 | Ryzen, EPYC, Ryzen Pro, Ryzen Mobile |
| Chimera   | Promontory + ASMedia Controllers | Ryzen, Ryzen Pro                     |
| Ryzenfall | Secure OS                        | Ryzen, Ryzen Pro                     |
| Fallout   | Secure Boot Loader               | EPYC                                 |

IC: It was stated, and I quote, that ‘this is probably as bad as it gets in the world of security’. These vulnerabilities are secondary attack vectors: they require admin level access, and they also do not work in virtualized environments, because you cannot update a BIOS or chip firmware from a virtual machine without bare metal access, which is typically impossible in a VM environment. What makes these worse than primary level exploits that give admin access?

ILO: I think that this is an important question. I will give you my opinion. I think that the fact that this requires local admin privileges doesn’t matter, in a sense, because the attack vector already has access to the files. What I think is particularly bad about this secondary attack is that it lets you put malware in hardware, such as the secure processor, which has the highest privileges in the system. You are sitting there and you can get to all memory sectors, and so from there you can stay undetected by antiviruses, and if the user reinstalls the operating system or formats the hard drive you still stay there.

So if you think about attaching that to a routine attack, a primary attack, this thing can let an attacker stay there, conduct espionage, and sit there indefinitely. Now put yourself in the shoes of a person who discovers that an attack was using this tool and needs to decide what to do now: they are basically guessing which machines to throw out. That is, I think, one degree of severity.

The other is the lateral movement issue, as you probably read in our whitepaper: the idea that you can break the virtualization of where the credentials are stored, where the Windows Credentials are in Windows 10. From this an attacker can move laterally in the network – I think that it is obvious that one of the major barriers to lateral movement is broken when you break the distinction between software and hardware. If you think about this not as a private user but as an organization that is facing an attack, this is very scary stuff, to think that hackers can have tools of this kind. This is why I think the language is not hyperbole.

IC: Most enterprise level networks are built upon systems that rely on virtual machines (VMs), or use thin clients to access VMs. In this circumstance no OS has bare metal access due to the hypervisor unless the system is already compromised…

ILO: Can I stop you there? That is not correct. That is entirely incorrect. We are talking about companies. You know, we have a company here – imagine you had a company with four floors, with workstations for employees that run Windows, and sometimes you have a domain environment on the network…

IC: Those are desktop systems, I specified enterprise.

ILO: Yeah, this is enterprise, this is a company. As I said, it has four floors with computers inside. They may be running Ryzen Pro workstations. They may have a Microsoft Windows Domain server, maybe a file server, and what we are talking about here is lateral movement inside corporate networks like this one. This is ABC, this is what happens all over the world, with reports about how Chinese hackers behave when they hack US companies, and this is what it looks like.

IC: What do you suppose the market penetration is of Ryzen based corporate work deployments?

ILO: Well you know they are trying to push hard into this market right now, but my own estimate – I don’t know, I haven’t done the market research – is that the market penetration is not very high. Hopefully it will stay this way until these issues have been resolved, as it puts the network at risk.

YLZ: I think that if you look at the market penetration – I have done more market research on this – and I think that analysts are now estimating that by 2020 that AMD will have 10% worldwide server market share. That is in two years. That is quite a few computers out there.

IC: But that is server market share – based mostly on VM oriented systems.

DK: Bare metal access to servers is a different animal. If you take the deployments of Azure, even if you have root privileges, you are still running virtualized. Servers are different to desktops. 

ILO: Regarding servers, the main impact – let us say you are a customer of Microsoft Azure, which is integrating EPYC servers right now. You have a virtual machine on the server and that is all you have. To be honest with you, in that particular situation the vulnerabilities do not help you very much. However, if a server gets compromised and the cloud provider is relying on secure virtualization to segregate customer data by encrypting memory, and someone runs an exploit on your server and breaks into the secure processor, they could tamper with this mechanism. I think this is one of the main reasons to integrate EPYC servers into the data center – it is the feature that EPYC servers offer to cloud providers, and that feature can be broken if someone gets access to the secure processor.

YLZ: If we’re talking about the cloud specifically, rather than servers in your own data center, then the secure processor can be taken over with very high privileges. So I think it is a huge deal.

YLZ: As much as it is a pleasure talking to you we have time for only a few more questions.

 

IC: Can you describe how you came up with the names for these exploits?

YLZ: It was our creativity and fervent imagination.

IC: Did you pre-brief the press before you spoke to AMD?

ILO: What do you mean by pre-brief the press?

IC: We noticed that when the information went live, some press were ready to go with relevant stories and must have had the information in advance.

ILO: Before our announcement you mean?

IC: Correct.

ILO: I would have to check the timing on that and get back to you, I do not know off the top of my head.

DK: I think the biggest question that I still have is ultimately who originated this request for analysis – who was the customer that kicked this all off?

ILO: I definitely am not going to comment on our customers.

DK: What about the flavor of customer: is it a semiconductor company, is it someone in the industry, or is it someone outside the industry? I don’t expect you to disclose the name, but asking about the genre seems quite reasonable.

ILO: Guys I’m sorry we’re really going to need to jump off this call but feel free to follow up with any more questions.

 

[End of Call]

 

This call took place at 1:30pm ET on 3/14. After the call, we sent a series of 15 questions to CTS-Labs at 6:52pm ET on the same day. As of 7:10pm ET on 3/15, we have not had a response. These questions included elements related to

  • The use of a PR firm, which is non-standard practice for this (and the PR firm was not involved in any way in our call, which is also odd),
  • Viceroy Research, a company known for shorting stock, and their 25-page blowout report published only three hours after the initial announcement,
  • And the 2018 SEC listing of the CFO as the President of NineWells Capital, a hedge fund based in New York that has interests in equity and corporate debt investments, with an emphasis on special situations.

If we get answers, we will share these with you.

Commentary and Clarification

All processors and chips have flaws – some are critical, others are simple, some can be fixed through hardware and software. There are security issues and errata in some processors that are several years old.

CTS-Labs’ reasoning for believing that AMD cannot patch within a reasonable period, coming from ‘industry experts’, seems off – almost as if they believe that when they enter a responsible disclosure time-frame, they cannot reveal the vulnerabilities until they are fixed. Their response to the question about Meltdown and Spectre – whether they would have taken the same attitude given that the industry had several months before publication for a coordinated effort – means that despite offering a semi*-unilateral reasoning for 0-day disclosure, they would not apply it unilaterally but on a case-by-case basis. The language used is clearly not indicative of their actual feeling and policy.

Blaming the lack of CVE numbers on being ‘new’ to this, while repeatedly citing many years of experience in security and cyber-security, is also a contradiction. It may be their first public disclosure, even as individuals, but I find it hard to believe they were not prepared on the CVE front. According to our own experts, it can take hours for a well-known company to be issued a CVE number, or weeks for an unknown entity. Had CTS-Labs approached a big security firm, or AMD, then these could have been issued relatively easily. CTS-Labs stated that they are waiting for CVE numbers to be issued; that is going to be an interesting outcome, especially if they appear and/or the time of submission is provided.

It seems a bit odd for a company looking into ASMedia related flaws to then turn its focus onto AMD’s secure processor, using the chipset vulnerabilities as a pivot point. ASMedia chips, especially the USB host controllers cited by CTS-Labs, are used on literally tens of millions of Intel-based motherboards around the world, from all the major OEMs. For a long period of time, it was hard to find a system without one. The weak argument for the decision to pivot onto newer AMD platforms, the wishy-washy language when discussing projects at the start of the company’s existence, and the abrupt ending to the call when asked to discuss the original customer could be construed (and this is conjecture) to mean that the funding for the project was purposefully directed towards AMD.

The discussion about the understanding of how vulnerabilities can be mitigated certainly piqued David’s interest. The number of things that can be done through microcode, or that are undocumented to third-party chip analysts, means that it came across as highly strange that CTS-Labs were fervent in their belief that AMD could not patch the issue in a reasonable time to warrant a longer responsible disclosure period (one of several arguments used). As seen in recent vulnerabilities and responsible disclosures, chip designers have implemented a number of significant methods to enable/disable/adjust features that were not previously known to be adjustable, such as resetting a branch predictor. All it takes is for the chip company to say ‘we can do this’ and for it to be implemented, so the use of high-impact language was certainly noted. The confusion between microcode and FPGA in the discussion also raised an eyebrow or three.

When approaching the subject of virtualized environments, the short, sharp acceptance that these vulnerabilities were less of an issue in VMs and with cloud providers was quickly overshadowed by the doom and gloom message for when a system is already compromised, even when it was stated that analysts expect 10% market share for EPYC by 2020. It was clear that the definitions of enterprise deployments differed between AnandTech and CTS-Labs, again partnered with large amounts of doom and gloom.

Lastly, the legal argument of not being able to share the details outside of Israel, or only with registered security companies outside of Israel, was an interesting one we were not expecting. This, coupled with the lack of knowledge on the effect of an open disclosure, led us to reach out to our legal contacts who are familiar with the situation. This led to the line:

“It’s BS, no restrictions.”

Some of our contacts, and readers with security backgrounds, have privately confirmed that most of this is quite fishy. The combination of the methodology and the presentation with a new company that claims to have experience but cannot manage CVE numbers is waving red flags.

Opinion

Going back to the two original questions, here is where I personally stand on the issue:

One What are the vulnerabilities, how bad are they, and what can be done?

One, if the vulnerabilities exist: It is very likely that these vulnerabilities are real. A secondary attack vector that could install monitoring software might be part of a multi-layer attack, but offering a place for indiscriminate monitoring of compromised systems can be seen as an important hole to fix. At this point, the nearest trusted source we have that these vulnerabilities are real is Alex Ionescu, a Windows Internals expert who works for CrowdStrike, one of the companies that CTS-Labs says has the full disclosure documents. That is still a step removed from us, a bit too far to warrant a full confirmation. Given that Trail of Bits required 4-5 days to examine CTS-Labs’ work, I suspect it will take AMD a similar amount of time to do so. If that is the case, AMD might have additional statements either on Friday or Monday, either confirming or rebutting the issues, and discussing future action.

Two Who are CTS-Labs, why does their approach to responsible disclosure differ from that of other security firms, why are a number of elements about the disclosure atypical of a security firm, what is their financial model, and who are their clients?

Two, the status of CTS-Labs: I’m more than willing to entertain the idea that, in a first public high-level disclosure, a security company can be out of step with a few of the usual expected methods of responsible disclosure and presentation. A number of new security companies that want to make a name for themselves have to be bold and brash to get new customers; however, we have never quite seen it to this extent – normally the work speaks for itself, or the security company will develop a relationship with the company with the vulnerability and earn its kudos that way. The fact that CTS-Labs went with a polished website (with nine links to download the whitepaper, compared to the Meltdown/Spectre website that had one), and a PR firm, is definitely a different take. The semi*-unilateral reasoning for a 0-day/1-day disclosure, followed by a self-rebuttal when presented with a more significant issue, shows elements of inconsistency in their immediate judgement. The lack of CVEs ready to go, despite the employees having many years of experience, including in the Israeli equivalent of the NSA in Unit 8200, also seems contradictory; an experienced security team would be ready. The swift acceptance that cloud-based systems are not vulnerable, followed by going straight into doom and gloom despite the limited attack surface in that market, shows that they are focusing on the doom and gloom. The reluctance of CTS-Labs to talk about clients and funding, or previous projects, was perhaps to be expected.

The initial downside of this story coming into the news was the foreboding question of ‘is this how we are going to do security now?’. Despite the actions of CTS and their decision to go with a 24-hour period, after speaking to long-term industry experts at high-profile technology companies, a standard 90-180 day pre-disclosure period is still the primary standard that manufacturers expect security companies to adhere to, in order to actively engage with responsible information sharing and verification. We were told that to go outside this structure ultimately formulates a level of distrust between the company, the security agency, and potentially the clients, regardless of the capabilities of the security researchers or the severity of the issues found; more so if the issues are blown out of proportion in relation to their nature and attack surface.


Comments

  • Strunf - Friday, March 16, 2018

    I don't contest this all smells fishy but if these attacks can infect a PC and stay there despite OS reinstall then it's quite a thing, someone could install one of these exploits at the production of the PC and then ship the PC with the exploit already in...
  • RandSec - Friday, March 16, 2018

    "if these attacks can infect a PC and stay there despite OS reinstall then it's quite a thing"

    No different, really, from any malicious BIOS re-flash. Once an attacker can run code on a target machine no security exists.
  • eva02langley - Friday, March 16, 2018

    I just read the two first questions so far... and basically these guns are guns for hire. "Find dirt"... this is disgusting.
  • eva02langley - Friday, March 16, 2018

    "these guys"
  • watzupken - Friday, March 16, 2018

    To be honest, the more information I read about this, the more I think this "Security Firm" is really dodgy. This entire fiasco appears to be trying to do harm to AMD, turns out to be affecting their reputation substantially more even if the mentioned security flaws are legit.
  • Speedfriend - Friday, March 16, 2018

    I am surprised so many people think that they are going to be done for stock manipulation. Doing research and then publishing your opinion of available facts is what analysts do all the time and they are allowed to have positions in the stock provided that the fact is disclosed. Unless the claims that they have found vulnerabilities are disproven, I find it hard to believe that anything will come out of it despite the fact they did not stick to industry norms.
    And let's be realistic, those industry norms are designed to protect the industry, not users. It is interesting that if a company is the victim of a cyberattack resulting from a vulnerability, they can get charged and fined if they don't immediately tell clients, but a tech company gets 90 days to fix a vulnerability while customers are blissfully unaware they may be exposing themselves. Of course, disclosing the vulnerability would let attack potentially utilise it while a fix is being worked on, so I am not sure what the answer is.
  • Topweasel - Friday, March 16, 2018

    I get what you are saying Speedfriend. The one thing I would say is that with their announcement came an article 25 freaking pages long analyzing this. Released at the same time they announced it. This means that those single guys and no one else had this information a long time before announcement. It was a financial write-up and not a technical (another red flag) and even specified a stock value (sub $1) and used the phrase "AMD will have to file for Chapter 11, to recover from these vulnerabilities".

    The fishyness on how they ended the call basically explains it. Those guys were the customers and they wanted to short sell AMD stock.

    If it was just that these guys Zero'dayed AMD it would look bad (and like someone else said if they were a real security company they just shot themselves in the foot). But it would mostly blow over. It's their presentation, the people they informed first, those guys presentation, and what the vulnerabilities actually are (all need a compromised system to become more compromised) altogether makes these guys seem like a joke/hitmen. Which in a way is sad. Sure AMD will patch the issues if their legitimate. Intel needs to as well because 90% of them apply to them as well. But it's sooooo poorly handled by CTS that even if this was Meltdown it would be hard to look past the trolling that CTS is a part of to look at the issue. They wagged the dog on themselves.
