PowerPoint, ArcaneDoor, the Z80 and Kaiser Permanente
Notable security news from the week of April 21st with a small side of nostalgia for the Z80 CPU; we'll dive into the exploitation of an old PowerPoint CVE from 2017, ArcaneDoor and the targeting of Cisco perimeter devices, and an enormous breach of Kaiser Permanente user information!

Dell & TicketMaster breaches, CVE & patch roundup and ProxyShell is back
Notable security news for the week of May 20th-26th, 2024, brought to you by the F5 Security Incident Response Team. This week, AaronJB is taking a look at breach news from Dell, a novel DNS attack technique, how threat actors still exploit old CVEs (like Exchange's ProxyShell: CVE-2021-34473, CVE-2021-34523 & CVE-2021-31207), why Industrial Control Systems shouldn't be connected to the Internet, and a quick round-up of vendor patches you should take a look at from Ivanti, Fortinet, TP-Link and F5.

Huge breaches, still in fashion

I originally had this segment planned so that I could talk about the recent Dell data breach which exposed the records of 49 million customers - name, physical address, Dell order information - you know, the usual kind of information that an adversary could use to construct a very convincing spearphishing attack (yet Dell apparently considers it low risk); but there is some late-breaking news which potentially makes this breach look tiny. I'll get onto that later. The Dell breach is interesting, though, as it was actually achieved using one of the most basic techniques (which I thought was long since 'fixed') - web scraping. The attackers simply registered a partner account using fake company details and then used a generated list of service tags to scrape the details of every order relating to those service tags - they sent 5,000 requests per minute for three weeks straight, and nobody noticed a thing. Dell Service Tags are unique asset identifiers consisting of seven alphanumeric characters - consider them a serial number - so the attackers just needed to generate possible combinations of service tags and then, one by one, request the details of the order behind each tag. Apparently, the attackers even tried to disclose this security issue to Dell, but received no response and set about monetizing their discovery instead.
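That 5,000-requests-per-minute figure is exactly the kind of signal a basic server-side rate limit would catch. Here's a minimal sliding-window limiter sketch - purely illustrative; the account name and threshold are my assumptions, not Dell's actual API design:

```python
# A minimal sketch of server-side rate limiting that would have flagged the
# scraping traffic (5,000 requests/minute from one partner account).
# Hypothetical illustration only -- thresholds and names are assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100  # generous for a legitimate partner

class SlidingWindowLimiter:
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_REQUESTS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.hits = defaultdict(deque)  # account_id -> request timestamps

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[account_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # reject -- and, ideally, log to a SIEM here
        q.append(now)
        return True

limiter = SlidingWindowLimiter()
# Simulate 5,000 requests arriving in one minute from a single account:
allowed = sum(limiter.allow("partner-123", now=i * (60 / 5000)) for i in range(5000))
print(allowed)  # everything past the first 100 is rejected
```

Any real deployment would pair this with alerting - the interesting signal isn't just rejecting request 101, it's that a single account hit the limit every minute for three weeks.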
It strikes me that there are so many places this could have been fixed ahead of time:

Least privilege: Does every partner account really need to be able to access the details of every possible service tag? (I would have thought not!)
Rate limiting: Does that API really need to be able to support endless requests from a single partner account? (I would have thought not!)
Logging: I don't know what the base requests-per-second rate is for that API, but shouldn't there at least have been some logging happening to a central SIEM about suspicious activity?

Any of the above could have stopped this attack dead - heck, I get the impression that Dell could have stopped the attack had they interacted with the original report sent to them; though perhaps the original report (in part redacted) was looking for a bounty and Dell declined to interact on that basis. Still, the published partial email does seem to indicate that the attackers provided a full PoC from the outset.

But wait, I said there was something bigger? Yes! This is late breaking and we don't have all the facts yet, but a couple of days ago posts began appearing on X suggesting that TicketMaster had suffered a breach of 1.3TB of data which included names, physical addresses, email addresses, phone numbers and the last four digits and expiry dates of payment cards associated with orders - 560 million rows of data. The validity of this was initially questioned but, unfortunately for us, later verified to be true; as vx-underground says:

Sometime in April an unidentified Threat Group was able to get access to TicketMaster AWS instances by pivoting from a Managed Service Provider.

At least this wasn't a simple web scraping attack, I suppose, but it highlights something for me: you need to be very careful who you trust to manage your systems, because your security is entirely in their hands.
Meanwhile, your reputation is entirely in your hands - when and if you are breached, your customers won't come with pitchforks for your MSP, they will come for you. This is also true of SaaS services, of course, and why SaaS companies (including ours) invest heavily in internal training, processes and patch management. I wonder if the MSP did, in this case?

Vendor patch watch

It's like Springwatch (for UK readers; for the rest of the world, that's a daytime TV show where cameras get shoved in badger setts, bird houses etc. and people watch baby animals that were born in springtime), but for vulnerabilities. Ivanti published patches for six Critical severity vulnerabilities (plus four High) in Ivanti EPM and a handful of other Ivanti products; if you use any of those you really should patch ASAP, although none have appeared in CISA's Known Exploited Vulnerabilities (KEV) list yet. Proof of Concept exploits were released for Fortinet's CVE-2024-23108 (disclosed in January), so if you haven't patched, you absolutely must, as the time from PoC availability to widespread exploitation is typically 24 hours or less. Rather unfortunately, the PoC reveals that CVE-2024-23108 is basically CVE-2023-34992 just in a different argument - still, we have all been there! TP-Link disclosed a CVSS 10.0 vulnerability in their Archer C5400X gaming router, and if you have one of those then you really need to patch - home routers are common targets for attackers looking to create botnets to carry out further attacks, and earlier TP-Link CVEs quickly appeared in CISA's KEV list as well as F5 Labs' Sensor Intel Series, which showed that CVE-2023-1389 (TP-Link Archer AX-21) was the most targeted vulnerability in March 2024! Finally, and not to be outdone, F5 had two disclosures in May; this is unusual for us as we typically coordinate our disclosures for Quarterly Security Notifications. However, this month we had both a QSN and an Out-of-Band notification affecting NGINX products.
You can find the details of our May 8th QSN in K000139404 and details of our NGINX-specific May 29th OOBSN in K000139628; fortunately for us the highest CVSS in our QSN was an 8.0, and in our OOBSN a 6.5 (and the OOBSN NGINX issues are all specific to QUIC, as well). As I say, we try to coordinate our disclosures for QSNs so that our customers can have a predictable cadence around which to plan updates & upgrades; we are committed to security, to working with external researchers, and to the security of the open-source community, however, and in some cases we must disclose issues out of band in order to best protect and serve our customer base and maintain the balance between transparency and security.

Novel DNS attack - DNSbomb

DNSbomb - I originally spotted this a couple of weeks ago, but last week it was the topic of a talk at the 2024 IEEE Symposium on Security and Privacy, so it has had a bit more coverage and there is now an easy-to-digest slide set available (with a video to follow) and even a one-page poster for your wall (seriously though, I actually love the idea of a one-page poster like this for new research; it's great for those of us who are attention-span challenged!). The idea behind this attack is to use a low rate of requests from a large number of hosts to fly under the radar, but rather than simply having those hosts query the target victim, those hosts send their queries to some intermediary concentrator (which could be a recursive resolver, CDN or similar) which will queue all of those requests up and send them in one big burst to the target victim (hence the "bomb" part of the name). The technique is interesting and novel, promising a theoretical amplification factor of 20,000x or greater and a peak "bomb" in the 9Gb/s range, but I have to admit that I haven't been able to properly go through the research paper or try to replicate their findings yet. Perhaps that will be the basis of a future DCCO article!
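The pulse mechanic at the heart of the attack can be sketched with some simple arithmetic. The numbers below are illustrative assumptions of mine, not measurements from the paper (which reports amplification up to ~20,000x):

```python
# Rough model of the DNSbomb "pulse" mechanic: queries trickle in at a low
# rate, are held by an intermediary (e.g. a resolver honouring long timeouts),
# then all the responses are released at once. All numbers are illustrative
# assumptions, not figures from the paper.

query_rate = 100          # queries/second sent by the attacker (low & stealthy)
hold_seconds = 10         # how long the intermediary queues before answering
query_bytes = 60          # size of each small query
response_bytes = 3_000    # size of each (possibly DNSSEC-inflated) response
burst_seconds = 0.05      # the queued responses drain in one short burst

queued = query_rate * hold_seconds                          # responses queued up
attack_bandwidth = query_rate * query_bytes                 # steady trickle in
burst_bandwidth = queued * response_bytes / burst_seconds   # bomb at the victim

amplification = burst_bandwidth / attack_bandwidth
print(f"{burst_bandwidth * 8 / 1e6:.0f} Mb/s burst, {amplification:,.0f}x amplification")
```

Even with these modest made-up parameters the attacker's steady-state traffic is a few kB/s while the victim sees a multi-hundred-Mb/s spike - which is why pulse attacks like this are so hard to spot from the attacker's side of the fence.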
If anyone has had chance to really understand the research - or better still, was at the IEEE Symposium - I'd love to hear from you!

Don't put your Industrial Control Systems on the Internet?

I thought this fell under the heading of "obvious news" but apparently not so obvious; Rockwell Automation and CISA have "encouraged" customers to assess and secure their public-internet-exposed ICS assets. Personally I'm struggling to understand why Industrial Control Systems would be exposed to the Internet, ever, but perhaps I am being naive here? Is the public internet just considered "easy connectivity" for ICS & IoT systems? Certainly a quick Google for things like "water treatment plant internet" shows plenty of articles discussing IoT for waste treatment monitoring, but do you really want the gate valves separating brown water from clean being controlled by a PLC hooked up to the Internet? OK, silly example, but my point stands - industrial controls typically look after things that are mission- or human-critical; toxic waste, nuclear power stations, manufacturing plants and so on. None of that stuff should ever be connected to the internet, and it terrifies me that Rockwell & CISA felt the need to reiterate that...

One last thing..

An article about MS Exchange flaws being leveraged to deploy keyloggers in highly targeted attacks. What caught my eye there wasn't the keylogger part (although that is neat; at least neat to see something that isn't just ransomware!) but rather that the threat actor is leveraging Exchange vulnerabilities from 2021 in the form of ProxyShell (CVE-2021-34473, CVE-2021-34523, and CVE-2021-31207).
I talk about this often, but as an industry we have to get better at patching exposed systems; perhaps the problem is we simply don't know what systems are exposed, perhaps the problem is a lack of time to patch, or a lack of corporate will to suffer potential downtime and push-back from Change Advisory Boards, but whatever the problem is, we really have to tackle it. I'd love to hear your stories from the front lines of patching things like ProxyShell; how long did it take, was there any fallout, management push-back etc.? Ancient Exchange flaws exploited - https://thehackernews.com/2024/05/ms-exchange-server-flaws-exploited-to.html

GhostStripe, Sec Clearance bill, JR EAST, Vulnrichment, and Solar Storm
This week Koichi is back as editor for another round-up of the news. This time I chose these security news items: GhostStripe, the Security Clearance bill and RISS, a suspected attack on Japan Railway (JR) East, Vulnrichment, and the Solar Storm.

Coordinated Vulnerability Disclosure: A Balanced Approach
The world of vulnerability disclosure encompasses, and affects, many different parties – security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue. The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options. Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk. This is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious. I’d argue that’s largely due to the exceptions that make headlines – because they’re exceptions. When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme. And those vendors are clearly not familiar with the Streisand Effect. The reality is that security researchers and vendors work together every day, with mutual respect and professionalism. We’re all part of the security ecosystem, and, in the end, we all have the same goal – to make our digital world a safer, more secure place for everyone. As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you’d much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than to become aware of it because it was exploited. Sure, this is where someone will say that vendors should be catching the issues before the product ships, etc.
In a perfect world that would be the case, but we don’t live in a perfect world. In the real world, resources are finite. Every complex product will have flaws because humans are involved – especially products that have changed and evolved over time. No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered. Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design. For example, the disintegration of the corporate perimeter and the explosion of remote work has exposed security shortcomings in a wide range of enterprise technologies. As they say, hindsight is 20/20. Defects often appear obvious after they’ve been discovered but may have slipped by any number of tests and reviews previously. That is, until a security researcher brings a new way of thinking to the task and uncovers the issue. For any vendor who takes security seriously, that’s still a good thing in the end. It helps improve the product, protects customers, and improves the overall security of the Internet.

Non sequitur. Your facts are uncoordinated.

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery. One option is to act unilaterally, disclosing the vulnerability directly. From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard to their intentions. Of course, this option brings with it both moral and legal complications. It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue so that it can be fixed. Another drastic, if less mercenary, option is Full Disclosure - aka the ‘Zero-Day’ or ‘0-day’ approach.
Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even a mitigation. This creates a race between attackers and defenders which, more often than not, is won by the attackers. It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field. Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed. But in the short and medium term they can do a great deal of harm to many environments and individual users as attackers have the advantage and defenders are racing to catch up. These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case – dropping a 0-day to embarrass a vendor, government, etc. Now, Full Disclosure does have an important role to play, which we’ll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes, etc. Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour. Coordinated Disclosure is the current term of art for what was once called ‘Responsible Disclosure’, a term which has generally fallen out of favor. The word ‘responsible’ is, by its nature, judgmental. Who decides what is responsible? For whom? To whom? The reality is it was often a way to shame researchers – anyone who didn’t work with vendors in a specified way was ‘irresponsible’.
There were many arguments in the security community over what it meant to be ‘responsible’, for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of ‘Coordinated Disclosure’. Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure. The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made, though this may vary with different programs – it may be as short as 60 days or as long as 120 days – and often includes modifiers for different conditions such as active exploitation, Critical Severity (CVSS) issues, etc. There is also the option of private disclosure, wherein the vendor notifies only customers directly. This may happen as a prelude to Coordinated Disclosure. There are tradeoffs to this approach – on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those unaware at increased risk. The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or of premature disclosure. Private disclosure without subsequent Coordinated Disclosure has several downsides. As already stated, there is a risk that not all affected users will receive the notification. Future customers will have a harder time becoming aware of the issues, and often scanners and other security tools will also fail to detect the issues, as they’re not in the public record. The lack of CVE IDs also means there is no universal way to identify the issues. There’s also a misguided belief that private disclosure will keep the knowledge out of the wrong hands, which is just an example of ‘security by obscurity’, and rarely effective.
It’s more likely to instill a false sense of security, which is counter-productive. Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc. Researchers who choose to work within the bug bounty program are bound by those rules, at least if they wish to receive the bounty payout from the program. Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities. If you can’t find a way to contact a given vendor, or aren’t comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or reporting directly to the Cybersecurity & Infrastructure Security Agency (CISA). I won’t go into detail on these programs here, as that could be an article of its own – perhaps I will tackle that in the future. As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities. Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy. We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well. The vendor is being given an opportunity to address the issue before it is disclosed. They should not see this as a burden or an imposition; the researcher is under no obligation to give them this opportunity. This is the ‘carrot’ being offered by the researcher. The vendor needs to act with some urgency to address the issue in a timely fashion, to deliver a fix to their customers before disclosure. The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so. The ’90-day’ guideline should be considered just that, a guideline.
The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them. Researchers should use their judgment, and if they feel that the vendor is making a good faith effort to address the issue but needs more time to do so – especially for a complex issue or one that requires fixing multiple products, etc. – it is not unreasonable to extend the disclosure deadline. If the end goal is truly improving security and protecting users, and all parties involved are making a good faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis. But there should still be some reasonable deadline; remember that it is an undisclosed vulnerability which could be independently discovered and exploited at any time – if it hasn't been already – so a little firmness is justified. Even good intentions can use a little encouragement. That said, the researcher also has a stick for the vendors who don’t bite the carrot – Full Disclosure. For vendors who are unresponsive to vulnerability reports, who respond poorly to such reports (threats, etc.), or who do not make a good faith effort to fix issues in a timely manner, this is the alternative of last resort. If the researcher has made a good faith effort at Coordinated Disclosure but has been unable to do so because of the vendor, then the best way to get the word out about the issue is Full Disclosure. You can’t coordinate unless both parties are willing to do so in good faith. Vendors who don’t understand that it is in their best interest to work with researchers may eventually learn that lesson after dealing with Full Disclosure a few times. Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.
In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article – security researchers, vendors, customers, consumers, and even random bystanders. Customers cannot make informed decisions about their networks unless vendors inform them, and that’s why we need vulnerability disclosures. You can’t mitigate what you don’t know about. And the reality is no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need. Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure, thus allowing customers to rapidly respond to the disclosure and patch their networks before exploits are widely developed and deployed, keeping more users safe. The coordination is about more than just the timing: vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information which makes exploitation easier. Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling patching faster than other disclosure methods, and ultimately results in a safer, more secure Internet for all. In the end, that’s what we’re all working for, isn’t it? I encourage vendors and researchers alike to view each other as allies and not adversaries. And to give each other the benefit of the doubt, rather than presume some nefarious intent.
Most vendors and researchers are working toward the same goals of improved security. We’re all in this together. If you’re looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.

InSpectre, Rust/PAN-OS CVEs, X URL blunder and More - April 8-14, 2024 - F5 SIRT - This Week in Security
Editor's Introduction

Hello, Arvin is your editor for This Week in Security. As usual, I collected some interesting security news. Credit to the original articles. Intel processors are affected by a Native Branch History Injection (Native BHI) attack, demonstrated using InSpectre Gadget, a tool that can find gadgets (code snippets that can serve as a jumping point to bypass software and hardware protections) in an OS kernel on vulnerable hardware. Spectre-style attacks that abuse speculative execution on processors have been around for a while now. Intel updated their previously published article on "Branch History Injection and Intra-mode Branch Target Injection" guidance and included an "Additional Hardening Options" section. The silver lining in this is that the CVEs' CVSS scores are Medium severity. See the section snippets below from the research paper by the researchers from VU Amsterdam that illustrate the use of the InSpectre Gadget tool. Rust has a critical CVE - CVE-2024-24576. It affects the Rust standard library, which was found to be improperly escaping arguments when invoking batch files on Windows using the library's Command API – specifically, std::process::Command. It is specific to Windows' cmd.exe, which has complex parsing rules that undermined the API's guarantee that untrusted inputs can be safely passed to spawned processes. Next is a PAN-OS Critical CVE, which affects devices with firewall configurations with a GlobalProtect gateway and device telemetry enabled. CVE-2024-3400 affects PAN-OS 10.2, PAN-OS 11.0 and PAN-OS 11.1; updates to fully fix this CVE were made available from April 14. Refer to https://security.paloaltonetworks.com/CVE-2024-3400 Change Healthcare's worries over the effects of a previous breach by the ALPHV ransomware group appear to be far from over.
Per the report, the victim organization was potentially "exit" scammed by ALPHV and is now being pursued by the "contractor/affiliate" behind the ransomware attack, RansomHub, which is demanding another round of ransom be paid; otherwise, they will sell the exfiltrated data to the highest bidder. X/Twitter had a URL blunder where anything containing the string "twitter" in the site's tweets was converted to the letter X - for example, netflitwitter[.]com would be converted to netflix[.]com. This behavior was reversed and things are back to usual, but twitter[.]com URLs now properly convert to x[.]com. Lastly, a round-up of issues from MS, Fortinet, SAP, Cisco, Adobe and Google/Android. As in previous TWIS editions, some of these stories were recurrences/follow-ups. In general, keep your systems up to date on software versions, secure access to them and allow only trusted users and applications to run. Implement layers of protections - updated AV/EDR/XDR on server and end-user systems; firewall/network-segmentation rules and IPS to prevent further spread/lateral movement in the event of a ransomware attack (BIG-IP AFM has network firewall and IPS features that you can consider); and a WAF to protect your web applications and APIs - BIG-IP ASM/Adv WAF, F5 Distributed Cloud Services and NGINX App Protect have security policy configurations and attack signatures that can mitigate known command injection techniques and other web exploitation techniques. End-user security training and awareness, incident response and reporting will help an organization should that first phishing email reach a target end user's mailbox. If it feels "off" and looks suspicious, stop and ponder before clicking. I hope this edition of TWIS is educational. You can also read past TWIS editions and other content from the F5 SIRT, so check those out as well. Till next time!
Rust rustles up fix for 10/10 critical command injection bug on Windows in std lib

Programmers are being urged to update their Rust versions after the security experts working on the language addressed a critical vulnerability that could lead to malicious command injections on Windows machines. The vulnerability, which carries a perfect 10-out-of-10 CVSS severity score, is tracked as CVE-2024-24576. It affects the Rust standard library, which was found to be improperly escaping arguments when invoking batch files on Windows using the library's Command API – specifically, std::process::Command. "An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping," said Pietro Albini of the Rust Security Response Working Group, who wrote the advisory. The main issue seems to stem from Windows' CMD.exe program, which has more complex parsing rules, and Windows can't execute batch files without it, according to the researcher at Tokyo-based Flatt Security who reported the issue. Albini said Windows' Command Prompt has its own argument-splitting logic that works differently from the usual Command::arg and Command::args APIs provided by the standard library, which typically allow untrusted inputs to be safely passed to spawned processes. "On Windows, the implementation of this is more complex than other platforms, because the Windows API only provides a single string containing all the arguments to the spawned process, and it's up to the spawned process to split them," said Albini. "Most programs use the standard C run-time argv, which in practice results in a mostly consistent way arguments are split. "Unfortunately it was reported that our escaping logic was not thorough enough, and it was possible to pass malicious arguments that would result in arbitrary shell execution."
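The bug class generalizes beyond Rust - any runtime that quotes arguments naively before handing them to cmd.exe has the same problem. Here's a rough Python sketch of the idea behind the fix (refuse arguments that can't be escaped safely for a .bat invocation); the character set and function names are my own illustration, not the actual std lib logic:

```python
# Illustrative sketch of the bug class behind CVE-2024-24576 (in Python, not
# Rust): cmd.exe re-parses the argument string itself, so naive quoting lets
# crafted arguments break out and chain extra commands. The check below
# mirrors the *idea* of the Rust fix -- refuse arguments that cannot be
# escaped safely for a .bat invocation -- not the real implementation.

UNSAFE_FOR_BATCH = set('"%\r\n')  # characters cmd.exe treats specially

def escape_batch_arg(arg: str) -> str:
    """Return arg wrapped for .bat use, or raise if it cannot be made safe."""
    if any(ch in UNSAFE_FOR_BATCH for ch in arg):
        # Analogous to the post-fix Rust behaviour: error out (InvalidInput)
        # rather than spawn the process with a dangerous argument.
        raise ValueError(f"argument cannot be safely passed to cmd.exe: {arg!r}")
    return f'"{arg}"'

print(escape_batch_arg("hello world"))

try:
    # A classic breakout attempt: close the quote, then chain a command.
    escape_batch_arg('foo" & calc.exe & "')
except ValueError as exc:
    print("rejected:", exc)
```

Failing closed like this is the key design choice: since cmd.exe's parsing rules are too messy to escape around reliably, the safe option is to refuse to spawn at all.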
https://www.theregister.com/2024/04/10/rust_critical_vulnerability_windows/

It's 2024 and Intel silicon is still haunted by data-spilling Spectre

Intel CPU cores remain vulnerable to Spectre data-leaking attacks, say academics at VU Amsterdam. We're told mitigations put in place at the software and silicon level by the x86 giant to thwart Spectre-style exploitation of its processors' speculative execution can be bypassed, allowing malware or rogue users on a vulnerable machine to steal sensitive information – such as passwords and keys – out of kernel memory and other areas of RAM that should be off limits. The boffins say they have developed a tool called InSpectre Gadget that can find snippets of code, known as gadgets, within an operating system kernel that on vulnerable hardware can be abused to obtain secret data, even on chips that have Spectre protections baked in. InSpectre Gadget was used, as an example, to find a way to side-step FineIBT, a security feature built into Intel microprocessors intended to limit Spectre-style speculative execution exploitation, and successfully pull off a Native Branch History Injection (Native BHI) attack to steal data from protected kernel memory. "We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations," the VU Amsterdam team said this week. "As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec."

https://www.theregister.com/2024/04/10/intel_cpus_native_spectre_attacks/

from https://download.vusec.net/papers/inspectre_sec24.pdf

2.2 Spectre v2

In 2018, the disclosure of Spectre [29] famously demonstrated how speculation can be used to leak data across security domains.
One variant presented in the paper, originally known as Spectre v2 or Branch Target Injection (BTI), shows how speculation of indirect branches can be used to transiently divert the control flow of a program and redirect it to an attacker-chosen location. The attack works by poisoning one of the CPU predictors, the Branch Target Buffer (BTB), which is used to decide where to jump on indirect branch speculation. Initially, mitigations were proposed at the software level and, later, in-silicon mitigations such as Intel eIBRS [5] and ARM CSV2 [12] were added to newer generations of CPUs to isolate predictions across privilege levels.

2.3 Branch History Injection

In 2022, Branch History Injection (BHI) [13] showed that, despite mitigations, cross-privilege Spectre v2 is still possible on the latest Intel CPUs by poisoning the Branch History Buffer (BHB). Figure 1 provides a high-level overview of the attack. In summary, by executing a sequence of conditional branches (HA and HV) right before performing a system call, an unprivileged attacker can cause the CPU to transiently jump to a chosen target (TA) when speculating over an indirect call in the kernel (CV). This happens because the CPU picks the speculative target for CV from a shared structure, the BTB, that is indexed using both the address of the instruction and the history of previous conditional branches, which is stored in the Branch History Buffer (BHB). Finding the right combination of histories that will result in a collision can be done with brute-forcing. To ensure the injected target, TA, contains a disclosure gadget, the original BHI attack relied on the presence of the extended Berkeley Packet Filter (eBPF), through which an unprivileged user can craft code that lives in the kernel.

Figure 2: InSpectre Gadget workflow. The analyst provides a kernel image and a list of target addresses to InSpectre Gadget (1), which performs in-depth inspection to find gadgets that can leak secrets and output their characteristics.
The gadgets can be filtered ② based on the available attacker-controlled registers and the mitigations enabled, and used to craft Spectre v2 exploits against the kernel ③.

Zero-day exploited right now in Palo Alto Networks' GlobalProtect gateways

Palo Alto Networks on Friday issued a critical alert for an under-attack vulnerability in the PAN-OS software used in its firewall-slash-VPN products. The command-injection flaw, with an unwelcome top CVSS severity score of 10 out of 10, may let an unauthenticated attacker execute remote code with root privileges on an affected gateway, which, to put it mildly, is not ideal. It can, essentially, be exploited to take complete control of equipment and drill into victims' networks. Updates to fully fix this severe hole are due to arrive by Sunday, April 14, we're told.

CVE-2024-3400 affects PAN-OS 10.2, PAN-OS 11.0 and PAN-OS 11.1 firewall configurations with a GlobalProtect gateway and device telemetry enabled. Cloud firewalls, Panorama appliances, and Prisma Access are not affected, Palo Alto says.

Zero-day exploitation of this vulnerability was detected on Wednesday by cybersecurity shop Volexity, on a firewall it was monitoring for a client. After an investigation determined that the firewall had been compromised, the firm saw another customer get hit by the same intruder on Thursday. "The threat actor, which Volexity tracks under the alias UTA0218, was able to remotely exploit the firewall device, create a reverse shell, and download further tools onto the device," the network security firm said in a blog post. "The attacker focused on exporting configuration data from the devices, and then leveraging it as an entry point to move laterally within the victim organizations."

The intrusion, which begins as an attempt to install a custom Python backdoor on the firewall, appears to date back at least to March 26, 2024.
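Since only certain release trains (and only configurations with a GlobalProtect gateway and device telemetry enabled) are exposed, a quick version triage across a fleet can help prioritise patching. Here is a minimal sketch; the fixed hotfix versions used below are assumptions based on the advisory as published at the time, so verify them against Palo Alto's current advisory before relying on this:

```python
# Rough triage helper: flag PAN-OS versions in the trains affected by
# CVE-2024-3400. Fixed hotfix levels below are assumptions drawn from
# the advisory at publication time -- confirm against the live advisory.
import re

# train (major, minor) -> first fixed (maintenance, hotfix)
FIXED = {(10, 2): (9, 1), (11, 0): (4, 1), (11, 1): (2, 3)}

def parse(ver: str):
    """Split a PAN-OS version string like '11.1.2-h3' into
    ((major, minor), (maintenance, hotfix))."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:-h(\d+))?", ver)
    if not m:
        raise ValueError(f"unrecognised PAN-OS version: {ver}")
    major, minor, maint, hotfix = (int(g) if g else 0 for g in m.groups())
    return (major, minor), (maint, hotfix)

def possibly_vulnerable(ver: str) -> bool:
    """True if the version sits in an affected train below the fix.
    Note: actual exposure also requires a GlobalProtect gateway and
    device telemetry to be enabled on the box."""
    train, rest = parse(ver)
    if train not in FIXED:
        return False
    return rest < FIXED[train]

print(possibly_vulnerable("11.1.2-h2"))  # True: below the fixed hotfix
print(possibly_vulnerable("10.2.9-h1"))  # False: at the fixed level
```

A check like this is only a first pass — it says nothing about whether a device was already compromised, which is why Volexity's blog post also lists indicators of compromise to hunt for.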
Palo Alto Networks refers to the exploitation of this vulnerability as Operation MidnightEclipse, which at least is more evocative than the alphanumeric jumble UTA0218. The firewall maker says that while the vulnerability is being actively exploited, only a single individual appears to be doing so at this point.

Mitigations include applying a GlobalProtect-specific vulnerability protection, if you're subscribed to Palo Alto's Threat Prevention service, or "temporarily disabling device telemetry until the device is upgraded to a fixed PAN-OS version. Once upgraded, device telemetry should be re-enabled on the device." It urged customers to follow the above security advisory and thanked the Volexity researchers for alerting the company and sharing its findings.

https://www.theregister.com/2024/04/12/palo_alto_pan_flaw/
https://www.volexity.com/blog/2024/04/12/zero-day-exploitation-of-unauthenticated-remote-code-execution-vulnerability-in-globalprotect-cve-2024-3400/
https://unit42.paloaltonetworks.com/cve-2024-3400/

Change Healthcare faces second ransomware dilemma weeks after ALPHV attack

Change Healthcare is allegedly being extorted by a second ransomware gang, mere weeks after recovering from an ALPHV attack. RansomHub claimed responsibility for attacking Change Healthcare in the last few hours, saying it had 4 TB of the company's data containing personally identifiable information (PII) belonging to active US military personnel and other patients, medical records, payment information, and more. The miscreants are demanding a ransom payment from the healthcare IT business within 12 days or its data will be sold to the highest bidder. "Change Healthcare and United Health you have one chance in protecting your clients data," RansomHub said. "The data has not been leaked anywhere and any decent threat intelligence would confirm that the data has not been shared nor posted."
The org is alleged to have paid a $22 million ransom to ALPHV following the incident – a claim made by researchers monitoring a known ALPHV crypto wallet and one backed up by RansomHub. However, Change Healthcare has never officially confirmed this to be the case. If all of the claims are true, it means the embattled healthcare firm is deciding whether to pay a second ransom fee to keep its data safe.

The prevailing theory among infosec watchers is that ALPHV pulled what's known as an exit scam after Change allegedly paid its ransom. While the ratios vary slightly between gangs, generally speaking, ransomware payments are split 80/20 – 80 percent for the affiliate that actually carried out the attack and 20 percent for the gang itself. It's believed that ALPHV took 100 percent of the alleged payment from Change Healthcare, leaving the affiliate responsible for the attack without a commission. Angry and searching for what they believed they were "owed," the affiliate is thought to have retained much of the data it stole and now switched allegiances to RansomHub in one last throw of the dice to earn themselves a payday, or so the theory goes.

UnitedHealth, parent company of Change Healthcare, disclosed a cybersecurity incident on February 22, saying at the time it didn't expect it to materially impact its financial condition or the results of its operations. It originally suspected nation-state attackers to be behind the incident, but the ALPHV ransomware gang later claimed responsibility. Many of its systems were taken down as a result while it assessed and worked to remediate the damage. Hospitals and pharmacies reported severe disruption to services following the attack, with many unable to process prescriptions, payments, and medical claims. Cashflow issues also plagued many institutions, prompting the US government to intervene.
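The 80/20 split described above makes the affiliate's grievance easy to quantify. Assuming the alleged $22 million payment and the typical split (both figures are claims, not confirmed facts), the arithmetic works out as:

```python
# Illustrative arithmetic only: the $22M figure and the 80/20 split are
# both unconfirmed claims reported in the story above.
ransom = 22_000_000
affiliate_cut = ransom * 0.80   # what the affiliate expected to receive
gang_cut = ransom * 0.20        # the operator's usual commission

print(f"Affiliate expected: ${affiliate_cut:,.0f}")  # $17,600,000
print(f"Operator's cut:     ${gang_cut:,.0f}")       # $4,400,000
```

If the exit-scam theory holds, the affiliate walked away from a $17.6 million payday — ample motive for a second extortion attempt under a new banner.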
The IT biz's data protection standards are soon to be subject to an investigation by the US healthcare industry's data watchdog, which cited the "unprecedented magnitude of this cyberattack" in its letter to Change.

https://www.theregister.com/2024/04/08/change_healthcare_ransomware/

X fixes URL blunder that could enable convincing social media phishing campaigns

Elon Musk's X has apparently fixed an embarrassing issue implemented earlier in the week that royally bungled URLs on the social media platform formerly known as Twitter. Users started noticing on Monday that X's programmers implemented a rule on its iOS app that auto-changed Twitter.com links that appeared in Xeets to X.com links. Because the substitution was a naive string replacement, any domain ending in "twitter.com" was rewritten too – so a registered lookalike such as Netflitwitter[.]com would be displayed as Netflix.com.

Attackers could feasibly copy legitimate web pages to steal credentials, or skip the trouble and simply use it as a malware-dropping tool, or any number of other possibilities. The potential for abuse here would be rife, given the number of legitimate, well-known brands most people would blindly trust. Netflix, Plex, Roblox, Clorox, Xerox – you get the picture.

According to tests at Reg towers on Wednesday morning, the issue appears to have been reversed. Netflitwitter[.]com now reads as such, but Twitter.com is auto-changed to X.com.
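The class of bug is easy to reproduce. A minimal sketch (this is obviously not X's actual code, just the pattern) shows why a blind substring replacement turns lookalike domains into trusted brand names, and how matching the hostname exactly avoids it:

```python
# Minimal reproduction of the URL-rewriting blunder: a blind string
# replacement rewrites any domain merely ENDING in "twitter.com".
# Not X's actual code -- just the class of bug.
from urllib.parse import urlsplit, urlunsplit

def buggy_rewrite(url: str) -> str:
    # Naive: replaces the substring anywhere it appears.
    return url.replace("twitter.com", "x.com")

print(buggy_rewrite("https://twitter.com/user"))
# -> https://x.com/user (intended)
print(buggy_rewrite("https://netflitwitter.com/login"))
# -> https://netflix.com/login (a phisher's dream)

def safe_rewrite(url: str) -> str:
    # Safer: parse the URL and compare the hostname exactly.
    parts = urlsplit(url)
    if parts.hostname == "twitter.com":
        return urlunsplit(parts._replace(netloc="x.com"))
    return url

print(safe_rewrite("https://netflitwitter.com/login"))
# -> https://netflitwitter.com/login (left alone)
```

The fix is the standard one for this trap: operate on the parsed hostname, not on the URL as an opaque string.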