XZ backdoor brainstorming
id | Author | Summary | Description | XZ impact | Discussion | See also | Workstream leader |
---|---|---|---|---|---|---|---|
1 | [Marcus Meissner] | identify every OSS developer | register all OSS developers in a big database with their passports | Perhaps | [Marcus Meissner] : this seems legally impossible
[Filippo Bonazzi]: See also [1](https://shkspr.mobi/blog/2021/02/whats-my-name-again/) [Johannes Segitz] also very unlikely to help with nation state actors [waix3xeiQu]: maybe not passports and maybe not "all" developers, but some database about OSS maintainers' trustworthiness could be something doable, and indeed that is what we have with gpg keyservers and signed keys. [isae1zaiTh] Determine a set of relevant projects with low "identifiability"/trust. Invite them to osc/SUSEcon and a key signing party there. [Filippo Bonazzi] +1 on Johannes, a nation state actor would happily send their person to osc and be all smiles and sign everything. Doing this could even be counterproductive. [Lars Marowsky-Bree] Not allowing pseudonyms would also be marginalizing certain communities at risk of online harassment and doxing. [Andrea Manzini] people can have a high level of trustworthiness and be / become bad actors when threatened, forced or bribed. Think of Klaus Fuchs: it's not like 80 years ago in Los Alamos they were letting just anybody participate. OSS projects should ideally be so resilient that even one bad actor would not be able to overcome them and inject backdoors. [Jan Zerebecki] current openpgp/gpg WoT tooling is insufficient for even the task of "can this person be trusted to do basic precautions for not compromising their own key?". E.g. there are no negative signatures. I tried asking some people (who should really know how it works) if they would revoke for a specific case, and their answer made clear that they do not understand the details and refuse to revoke. Also IMHO the majority of software is written with architectural decisions that are incompatible with security, so even if a "real" identity is well verified, needing to untrust the majority of the identities means you can not assemble a working distribution. Yes, relying on government id (see state actors) or forbidding pseudonyms is a bad idea. However all these issues can be fixed. So I think we would do well to fund making well-working WoT tooling. Beyond just establishing a secure channel, having people meet at least a few times in person and be well connected is important for doing security reviews well. My understanding is that a pervasive Zersetzung of meeting in person was partially successful. Which again can be repaired, which is worth funding. We should focus in this area on post-merge reviews instead of the authors/developers of software, if only because reviews can be done multiple times by people that do not trust each other. Do we have evidence about the nature of the connectedness of Jia Tan to others, e.g. do we have evidence that it is a pseudonym maintained by a group, or did anyone meet this person in the physical world? [Stanislav Brabec] A new core developer with push permission and no/short overall developer history could indicate a potential risk. Projects that have such developers present a potential risk. [xae8eiQui6] From my perspective, the anonymity was not the issue in this case, because we quite often face the situation that a source can't really be trusted in OSS. Instead, our reviews should ensure security, at least of key components, and given that we have the source code, we are able to review it. [Matej Cepl] I have spent years (still as a lawyer) thinking about the certification of real identity and I don't believe it is possible without concentrated involvement of all governments around the world (in the context of EDS and e-commerce).
Never mind software distribution: instituting reliable online identity of parties would completely revolutionize e-commerce. However, given that many governments (looking at you, USA) will never allow certifying their citizens' identity, it will never happen. I don't want to discuss whether that is good or bad; that's another discussion. |
n/a | n/a |
2 | [Marcus Meissner] | tarball and git commit signing | require all commits and tarballs and other artifacts signed | No | [Marcus Meissner] good idea in general!
[Johannes Segitz] agreed, if just to make issues like that traceable [Fabian Vogt] Many upstreams don't do this and it's also not really that much of a benefit. Anything on GH has bogus "Verified by GitHub" signatures which have basically negative value. [Eugenio Paolantonio] related (for upstreams): [2](https://github.com/cgwalters/git-evtag) [Eugenio Paolantonio] downstream package maintainers might check eventual diffs between the (signed) tarball and the related VCS tag, and pin the git reference; this might be a solution for upstreams that don't sign their commits (but a tag can be moved, so it is perhaps better to pin a SHA-1 commit directly rather than the tag in that case) [Dirk Mueller] does not protect against a malicious upstream maintainer, as they can sign all day long. [Ricardo Branco] tarballs are not stable in git forges as they are generated from `git archive` and `git` may change: [3](https://github.blog/2023-02-21-update-on-the-future-stability-of-source-code-archives-and-hashes/) [Jan Zerebecki] [4](https://gitlab.com/source-security/git-verify) is a prototype for making signing and verification easier. It deals with "Verified by GitHub" by maintaining the project's authorized public keys in the git repo, so by not authorizing it you do not accidentally trust it. For new repos you should deal with that key by not assigning it trust. Author and reviewer signatures help by differentiating authors when only one is compromised, to reduce the amount of code you need to look at after a compromise is noticed. Tarballs should only be signed by automated builds with the assertion that they are reproducible from VCS, see idea reproducible builds (23) and detecting differences in tarballs (3). We should then run a reproducer which also publishes such signatures. This also helps with changes to the tar output with changed Git versions, as the reproducibility assertion has the information to use the same Git version, which then produces bit-identical output. Yes, git submodules or obs\_scm pinned to a commit-id are easier. [xae8eiQui6] I would prefer signed tarballs, either by the original issuer or by a later reviewer, for security-critical key components, for better tracing. |
n/a | [Cathy Hu] |
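A minimal sketch of how idea 2 could be checked mechanically: verify that a git tag carries a signature from an allow-listed key, in the spirit of the git-verify prototype mentioned above. The `ALLOWED_FPRS` set and the command-line interface are illustrative assumptions, not an existing tool.

```python
#!/usr/bin/env python3
"""Sketch: check that a git tag is signed by an allow-listed OpenPGP key.

Assumes git and gnupg are installed and the signing keys are already in the
local keyring; ALLOWED_FPRS is a hypothetical per-project allow list.
"""
import subprocess
import sys

ALLOWED_FPRS = {
    # hypothetical upstream release-manager key fingerprint
    "0123456789ABCDEF0123456789ABCDEF01234567",
}

def signing_fingerprint(repo: str, tag: str) -> str | None:
    # "git verify-tag --raw" prints GnuPG status lines on stderr;
    # the VALIDSIG line carries the fingerprint of the signing key.
    proc = subprocess.run(
        ["git", "-C", repo, "verify-tag", "--raw", tag],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        return None
    for line in proc.stderr.splitlines():
        if line.startswith("[GNUPG:] VALIDSIG"):
            return line.split()[2]
    return None

if __name__ == "__main__":
    repo, tag = sys.argv[1], sys.argv[2]
    fpr = signing_fingerprint(repo, tag)
    if fpr is None:
        sys.exit(f"{tag}: no valid signature")
    if fpr not in ALLOWED_FPRS:
        sys.exit(f"{tag}: signed by unexpected key {fpr}")
    print(f"{tag}: signed by allow-listed key {fpr}")
```

The same status-line parsing works for `git verify-commit`, so per-commit author and reviewer signatures could be checked the same way.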
3 | [Wolfgang Frisch] | upstream repos: detect differences between git tags and released tarballs | can be automated, but needs ignore lists for exceptions, e.g. generated autotools files sometimes only found in release tarballs | Perhaps | related: [src.tar.gz proposal](https://blog.josefsson.org/2024/04/01/towards-reproducible-minimal-source-code-tarballs-please-welcome-src-tar-gz/)
This would always detect differences for GNU auto\*-based projects; how do we verify those? The enablement of the backdoor in the xz case happened exactly here. [Johannes Segitz] probably tricky to do at scale, but we should check it [Fabian Vogt] Given that there's a move to use VCS directly in the future instead of upstream tarballs, this fits nicely in any case [Jan Zerebecki] generated autotools files are a build step, so reproducible builds should be used, see idea (23). Even if autotools output is committed in the repo, that part should be tested to be reproducible by the CI, with machine-readable information in git about which files are generated and how to reproduce them. [Stanislav Brabec] Generated files should not be present in git. (There are some exceptions like pot files for Weblate.) And there should be no exceptions for the rest. But generated files like configure can also contain malicious code. A solution could be trusted environments for the automated tarball generation and pushing, exactly as OBS does with RPM packages. The trusted environment robot could push signed tarballs that certify that the tarball was generated by a known process (e.g. "./autogen.sh ; configure ; make dist") in a genuine way. |
n/a | [Wolfgang Frisch] |
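A sketch of the tarball-vs-tag comparison from idea 3, assuming the upstream repo is already cloned locally; the `IGNORE` patterns stand in for the per-project exception list mentioned in the description (generated autotools output that legitimately exists only in release tarballs).

```python
#!/usr/bin/env python3
"""Sketch: diff a release tarball against the corresponding git tag.

Assumes a local clone of the upstream repo; IGNORE holds patterns for files
that legitimately exist only in release tarballs.
"""
import fnmatch
import hashlib
import io
import subprocess
import sys
import tarfile

IGNORE = ["configure", "Makefile.in", "aclocal.m4", "config.h.in", "build-aux/*", "m4/*"]

def ignored(path: str) -> bool:
    return any(fnmatch.fnmatch(path, pat) for pat in IGNORE)

def tar_digests(data: bytes) -> dict[str, str]:
    # Map member path (leading top-level directory stripped) to sha256.
    out = {}
    with tarfile.open(fileobj=io.BytesIO(data)) as tf:
        for member in tf.getmembers():
            if not member.isfile():
                continue
            path = member.name.split("/", 1)[1] if "/" in member.name else member.name
            out[path] = hashlib.sha256(tf.extractfile(member).read()).hexdigest()
    return out

def main(repo: str, tag: str, tarball: str) -> int:
    # Recreate the tag contents with git archive and compare file by file.
    git_tar = subprocess.run(
        ["git", "-C", repo, "archive", "--format=tar", "--prefix=release/", tag],
        check=True, capture_output=True).stdout
    from_git = tar_digests(git_tar)
    from_release = tar_digests(open(tarball, "rb").read())
    issues = 0
    for path, digest in sorted(from_release.items()):
        if ignored(path):
            continue
        if path not in from_git:
            print(f"only in tarball: {path}")
            issues += 1
        elif from_git[path] != digest:
            print(f"content differs: {path}")
            issues += 1
    return 1 if issues else 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:4]))
```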
4 | [Filippo Bonazzi] | drop all usages of upstream tarballs, use obs\_scm | Obtain code to package in OBS via Git rather than tarball | Partially | That would conflict with existing policy of the distros that the sources must not be modified by OBS. With manual mode the critical trust point just moves from upstream maintainer to distro maintainer. IMHO 11) is the better way. (NOTE: Please add your name to comments such as this one)
[Fabian Vogt] Closely related to 3)? [Dirk Mueller] then upstream would have committed the backdoor to git, just in a different obfuscation. [OhPhiu0Vai] the distro maintainer is still a critical trust point regardless, but this proposal would still ensure that the code we are building is 1:1 with upstream [Matej Cepl] cgwalters mentioned on Mastodon his project [5](https://github.com/cgwalters/git-evtag) as possibly helpful here. [Jan Zerebecki] AFAIK this does not conflict with policy. Though we still need to implement testing for reproducibility of this, see [6](https://github.com/openSUSE/obs-service-source_validator/issues/134). I currently do this manually when reviewing submit requests. |
11 | [Filippo Bonazzi] |
5 | [Filippo Bonazzi] | add data / binary blob review as part of software audits | Take an at least cursory look at data (everything that is not code) contained in software submissions to OBS.
I have some personal ideas that I am trying in this direction |
Perhaps | [Johannes Segitz] difficult to scale unless automated. Once it's automated it won't help anymore. But getting an overview of how common this is would be valuable in itself
[Fabian Vogt] Is there a way to enforce that no upstream binaries are executed during build? [Andrea Manzini] I'd say at least reject any .o files without debug symbols in an automated build process. [Jan Zerebecki] in many cases they can be replaced with source and a build step. E.g. the test files from xz could be generated during testing with the just-built xz, checked against a hash to avoid an error in the compressor, and for the corrupt samples a correct one can be modified in a way that is understandable. There are various other strategies to prove there is nothing up my sleeve, see e.g. [7](https://safecurves.cr.yp.to/). [xae8eiQui6] Since everything other than source code can be a lot, e.g. images (you might even execute parts of them in a test suite), I think it would be crucial to identify what is done with blobs or what they might do, and specialized AI might be helpful there. I would still mostly consider this for specific packages crucial for basic security. |
n/a | [Filippo Bonazzi] |
6 | [Bernhard Wiedemann] | support struggling upstream developers | This tiny project was maintained by a single person (Lasse) who was struggling with health issues, which gave leverage to pass maintainership to a stranger. See also [8](https://xkcd.com/2347/) . [openssf.org](http://openssf.org) or another foundation could ensure vendor-neutrality | Perhaps | see also #20
[Matej Cepl] I appreciate the sentiment (I am a solo upstream maintainer as well [M2Crypto](https://gitlab.com/m2crypto/m2crypto)), but I don't know how you would identify the struggling ones, or how to organize it logistically. Creating some kind of para-Linux Foundation? Shouldn't we just join forces with people who already do it ([9](https://sfconservancy.org/), [10](https://tidelift.com/) et al.)? [Ja6baiSeiR] It's less about replacing the Linux Foundation, more about analyzing who is struggling (because of working alone, see #20) and going to them and saying: here you can get help; either we employ them or they get money from one of the existing foundations [Andrea Manzini] sadly, anyone who now jumps in to offer help on a barely maintained project risks being mistaken for someone with bad intentions [Ja6baiSeiR] I do not think that's the case; that only happens if your internet history is practically non-existent. If you are visible then this is still possible. Satoshi Nakamoto might have problems though :D [Lars Marowsky-Bree] Projects like the Sovereign Tech Fund, OpenInfra, etc. already aim to do this, but they do require maintainer outreach. However, a project health assessment (such as CHAOSS) for the entire SBOM we ship should help identify such struggling projects that have "very few" active contributors and/or whether they're all in a single company etc. (Bernhard wonders: is "single company" good or bad here?) [Kristyna Streitova] I think that both approaches should be used here. Both actively identifying struggling projects and offering help (#20), and having an organization that can educate struggling maintainers about their options, provide support to them if they reach out, and connect them with other existing projects and organizations. [Marcus Meissner] after Heartbleed, a "Core Infrastructure Initiative" was launched. [11](https://en.wikipedia.org/wiki/Core_Infrastructure_Initiative) This was turned into the Open Source Security Foundation, which introduced a "software criticality index" to identify important projects. [Paul McKeith] Protections from abuse of a struggling-developer assistance program / policy would be required. It would also need to consider a sudden or temporary "struggling" event or, sadly, an unexpected death. Most nefarious would be an extortion event, especially by a state actor. |
47 | n/a |
7 | [Filippo Bonazzi] | consider dropping unsupported/shaky upstreams | Especially for SUSE products, consider dropping and/or substituting abandoned/shaky upstream software.
Single-maintainer or effectively-single-maintainer software should be considered a smell. This should be evaluated while auditing upon Factory submission, and also retroactively at significant moments (like right now given current events, or when submitted to a new codestream) |
Perhaps | [Johannes Segitz] this is definitely something we should consider and also where just having the information itself and being aware of the risk would be helpful. But I fear that we will realize that this is not uncommon
[Filippo Bonazzi] for openSUSE this is likely inevitable. for SUSE we could try to set a higher bar [Ja6baiSeiR] here also an ongoing automated analysis would help to see which projects are actually shaky (see my suggestion in #20) [OhPhiu0Vai] How do we handle software like xz in this case? Do we maintain the code? Writing a replacement is highly unlikely [Ja6baiSeiR] Depends on the software (and existing alternatives). We could implement it ourselves, help out upstream, help with going to the OSSF to get funding, just drop the functionality... There are different ways to do this [Adam Majer] I'm afraid we would end up with very few upstreams. Think how shaky OpenSSL was upstream prior to the Heartbleed wakeup call, and that's just one example. [Lars Marowsky-Bree] Similar to the previous suggestion, making a (regularly updated) project health assessment part of the metrics looked at before inclusion could help. (What we then do about this is the next question.) [Matej Cepl] to [OhPhiu0Vai] that should be a standard practice anyway: when we have a package with a dead upstream and many patches, we should either take over or make a new upstream (that's how I became maintainer of M2Crypto, or how our Marcela created cronie some time ago). [xae8eiQui6] It appears reasonable to me to reconsider using e.g. C code if there is a Rust project replacing it, and to reconsider the usage of code that appears somehow strange. But I don't see that it is reasonable to judge code quality by the number of contributors or maintainers. From my perspective, there are a few packages where we need to ensure ourselves that they are OK if we really need them. |
n/a | [Johannes Segitz] |
8 | [Filippo Bonazzi] | separate code and tests when building | When building software, delete everything that is not the source code of the deliverable software (all tests, data, ...). After software is built, execute any tests using a read-only version of the deliverable software. | Yes | note by [Bernhard Wiedemann]: the rpm %check section can modify package content
[Filippo Bonazzi] It would help by removing access to the test payload while building [OhPhiu0Vai] It would help when paired with 4) and 11) [Matthias Eckermann] Maybe we could have two OBS runs? One with and one without tests. This would show whether the test run changes the package, correct? [eoy5Luth6X] No need for two OBS instances, two repositories would be enough (with different config). However, something would need to remove the test files, and if that is known, the attack could easily be modified to bring the backdoor in e.g. some other file like images. IMHO separating QA checks may make sense for other reasons, but it won't help if we already cannot trust the upstream repo. [Matej Cepl] Isn't it the current packaging guideline that `%check` must be run against installed files, not against the `test/` subdirectory? [xae8eiQui6] Even if the attack might be modified, I like the idea of reducing the files needing to be reviewed. [Berthold Gunreben] If you take a snapshot of the filesystem (btrfs?) after %build and compare after %check, you still would have a read-write filesystem but could easily detect modified files. |
n/a | [Filippo Bonazzi] |
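A portable sketch of the %build/%check comparison discussed in idea 8 (a file-hash variant of the btrfs snapshot suggestion): record digests of the built payload after %build, verify them after %check, and report anything the test run touched. The state-file format and invocation are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: detect files modified between %build and %check.

Record file hashes after %build ("record"), re-check after %check ("verify").
The directory argument (e.g. the buildroot) is an assumption about where the
built payload lives.
"""
import hashlib
import os
import sys

def snapshot(root: str) -> dict[str, str]:
    # Map relative path -> sha256 of content for every regular file under root.
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            with open(path, "rb") as fh:
                digests[os.path.relpath(path, root)] = hashlib.sha256(fh.read()).hexdigest()
    return digests

def diff(before: dict[str, str], after: dict[str, str]) -> list[str]:
    changed = [p for p in before if p in after and before[p] != after[p]]
    added = [p for p in after if p not in before]
    removed = [p for p in before if p not in after]
    return ([f"changed: {p}" for p in changed]
            + [f"added: {p}" for p in added]
            + [f"removed: {p}" for p in removed])

if __name__ == "__main__":
    # usage: snapshot.py <dir> <state-file> {record|verify}
    root, state, mode = sys.argv[1:4]
    if mode == "record":
        with open(state, "w") as fh:
            for path, digest in sorted(snapshot(root).items()):
                fh.write(f"{digest}  {path}\n")
    else:
        before = {}
        with open(state) as fh:
            for line in fh:
                digest, path = line.rstrip("\n").split("  ", 1)
                before[path] = digest
        problems = diff(before, snapshot(root))
        print("\n".join(problems) or "no modifications detected")
        sys.exit(1 if problems else 0)
```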
9 | [Bernhard Wiedemann] | use more sandboxing | redesign upstream software to use e.g. seccomp filters and/or separate sub-processes without permissions so that any code running in there needs less trust. Can reduce performance, so probably mostly for root-daemons and suid software. | Perhaps | [Marcus Meissner] FWIW openssh uses seccomp already, we should find out why this was not used / effective.
[Jan Zerebecki] there is also landlock, seccomp alone is not enough to sandbox certain things |
n/a | n/a |
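To make idea 9 concrete, a minimal seccomp allow-list sketch, assuming the libseccomp Python bindings (python3-seccomp) are installed; the syscall list and the EPERM default action are illustrative only, and a real profile would be derived from tracing the actual workload.

```python
"""Sketch: confine untrusted work (e.g. decompression) with a seccomp allow list.

Assumes the libseccomp Python bindings (python3-seccomp) are available. The
allow list below is illustrative, not a vetted profile; a stricter profile
could use KILL as the default action once the full syscall set is known.
"""
import errno
import lzma
import sys
import seccomp

def confine() -> None:
    # Deny by default: anything not explicitly allowed fails with EPERM.
    f = seccomp.SyscallFilter(defaction=seccomp.ERRNO(errno.EPERM))
    for name in ("read", "write", "brk", "mmap", "munmap", "futex",
                 "exit", "exit_group", "rt_sigaction", "sigaltstack", "close"):
        f.add_rule(seccomp.ALLOW, name)
    f.load()

if __name__ == "__main__":
    # Read the compressed input first (open() would be refused afterwards),
    # then drop to the reduced syscall set before touching untrusted data.
    data = open(sys.argv[1], "rb").read()
    confine()
    sys.stdout.buffer.write(lzma.decompress(data))
```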
10 | [Filippo Bonazzi] | move away from autotools | Classic software like autotools and others make it very easy to hide obfuscated instructions in the extensive and difficult to read files that they produce. Moving to more modern tools would at least raise the bar for backdoor injection | Perhaps | [Filippo Bonazzi] already notes: very difficult to do in the general case when upstream is using one of these legacy tools.
we would need to carry the migration burden internally. risk vs reward [OhPhiu0Vai] We could put a policy in place to inspect the ones that are not using modern tools (e.g. cmake/meson). Plus, if a couple of distros push on that, maybe upstreams could agree to "update" [Ja6baiSeiR] there was also this cmake change by the supply chain attacker. you can do this in many build tools. [Zie1vie8le] TBH the cmake change only broke landlock support. hiding the whole decoder stuff for the exploit payload would have been a bit harder. [Jan Zerebecki] You said "produce", so this is talking about the build output of autotools? Maybe reproducible builds (23), or always rerunning autotools during build (which could be used to prove the output is what was generated), is an easier road than replacing autotools. |
n/a | [Cathy Hu] |
11 | [eoy5Luth6X] | Avoiding tarballs, referencing upstream git directly in package git | We have a possible mechanism in git-based packaging to pin the upstream git via a git submodule. This allows storing the identifier (at the moment unfortunately still often SHA-1) of the upstream git. Furthermore it makes diffing, including upstream commit logs, easier and allows further checks. This also avoids injection of additional content and allows providing a provable identifier via SBOM, so it makes the additional value visible to the customers as well. Example: [12](https://github.com/adrianschroeter/git-example-4). | Partly | Duplicate of 4
[Filippo Bonazzi] not a duplicate of 4, different angle. Please add your name to comments. It goes in the same direction as 4, but also adds additional value to make it fully reproducible and trackable for SBOM. [Dirk Mueller] we already store a full copy of the sources for traceability; linking a sha1 does not add value (when the reference disappears, what do we do? fix it) [Ja6baiSeiR] do I see this correctly, that for this we need less SHA-1, more signing, and a guarantee that there is no force-overwrite of the history? Answer: Well, both are not directly under our control anyway and are in all cases a problem of the upstream project, no matter how we consume their sources. We can of course set up our package git with SHA-256 and no force-overwrite, but it will often still point to a SHA-1 upstream repo. While it is true that we always have and will have a copy of the sources, we gain an identifier into the upstream repo that was used. We can detect rewritten history there, we can report it in the SBOM for additional supply chain trust by our customers, and we avoid injected modifications in the tarball building process. |
4 | [Filippo Bonazzi] |
12 | [Filippo Bonazzi] | Partly related to #11: prevent upstream repo history rewriting | One item I've seen discussed is how even commits preceding Jia Tan's maintainer access could have feasibly been compromised retroactively.
This is very difficult to do because many parties would see git conflicts when history is changed. We could be one of these vigilant parties. e.g. save repo state when submitting to OBS, notify upon changes to commits preceding last save. |
No | [Ja6baiSeiR]: You can configure git so that this is prohibited/notified. +1 on the mirror and verify idea
We mirror many upstream git repos onto gitlab.suse.de; it would be possible to report force-pushes there. [Fabian Vogt] Most projects try to do this already. If someone wants to hide changes, it's easier to do so in merge commits. [Filippo Bonazzi] true, it is trivially and shockingly easy to hide changes in merge commits. could we do something about this? [eoy5Luth6X] We don't have control over the upstream git, but it is already handled in the sense that we always also store the old state with the scmsync mechanism of git packaging on the OBS side. We could extend it to also store the git history on each update. But in any case we have all code states ever used by us, independent of the upstream git server state. |
n/a | [Johannes Segitz] |
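A sketch of the mirror-and-verify approach from idea 12: record the branch heads of a mirrored upstream repo and warn when a later state is not a fast-forward of the recorded one (i.e. history was rewritten). The JSON state file is a hypothetical bookkeeping format.

```python
#!/usr/bin/env python3
"""Sketch: flag rewritten history in a mirrored upstream repo.

Assumes a local mirror that is fetched periodically; the state file mapping
branch -> last seen commit is an assumed format.
"""
import json
import subprocess
import sys

def git(repo: str, *args: str) -> str:
    return subprocess.run(["git", "-C", repo, *args],
                          check=True, capture_output=True, text=True).stdout.strip()

def current_heads(repo: str) -> dict[str, str]:
    heads = {}
    for line in git(repo, "for-each-ref",
                    "--format=%(refname:short) %(objectname)", "refs/heads").splitlines():
        branch, commit = line.split()
        heads[branch] = commit
    return heads

def is_ancestor(repo: str, old: str, new: str) -> bool:
    # Exit status 0 means `old` is reachable from `new`, i.e. a fast-forward.
    return subprocess.run(
        ["git", "-C", repo, "merge-base", "--is-ancestor", old, new]).returncode == 0

if __name__ == "__main__":
    repo, statefile = sys.argv[1], sys.argv[2]
    try:
        previous = json.load(open(statefile))
    except FileNotFoundError:
        previous = {}
    heads = current_heads(repo)
    for branch, old in previous.items():
        new = heads.get(branch)
        if new is None:
            print(f"WARNING: branch disappeared: {branch}")
        elif not is_ancestor(repo, old, new):
            print(f"WARNING: non-fast-forward update (history rewritten?) on {branch}")
    json.dump(heads, open(statefile, "w"), indent=2)
```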
13 | [Alexandre Vicenzi] | FLOSS Scoring
Give a score to each package |
In the automotive team we looked into [OSS scoring](/display/ATeam/OSS+scoring), a few tools to classify an open source project (we even created one based on others): how critical, how bad, how safe/unsafe, and so on.
This tool could guide us on which projects are critical for us (and thus need a closer look and upstream help), which projects are very high risk, and which are not. We can outline potential threats, see [13](https://xkcd.com/2347/). |
Perhaps | Bernhard Wiedemann: scoring is helpful to discover weak upstreams. This gives a better basis to decide on actions (help/drop/replace)
[Johannes Segitz] [14](https://openssf.org/) has a working group for this [Alexandre Vicenzi] [15](https://fossa.com/) can track certain things (vulnerabilities and licenses mainly) [Lars Marowsky-Bree] This makes sense. There were quite a few talks about similar metrics at FOSS Backstage. [Paul McKeith] I like the scoring idea, but it often results in a false sense of security. And scoring systems must evolve with attacks and technology. |
n/a | [Johannes Segitz] |
14 | [Zie1vie8le] [Bernhard Wiedemann] | Have our maintainers actually be involved upstream | Give our package maintainers enough time to actually be involved in the upstream of their packages. Have them review pull requests and code changes so we can maybe get an idea of smelly code that gets committed; e.g. the ifunc stuff seemed smelly to begin with for liblzma. But the current workload is probably already too high, so this would need more hires and also training for our devs/packagers. This would be even trickier for all the applications where we allow vendoring, as you would need to monitor the whole chain, which is another point against vendoring.
A pro would be that this work could already help upstream, especially with vetting pull requests. Plus our devs could give feedback to upstream on how a change would affect distributions and also get early indications of whether they should test an upstream change. I do that for a few packages, but not all. |
Perhaps | [OhPhiu0Vai] This would mean becoming an expert in that package/field. Considering just a base system of 2000 packages, that's still hard to push
[Ja6baiSeiR] Well, we are an operating system and kinda-sdk-vendor, we should have that knowledge anyway. [OhPhiu0Vai] I'd argue that having operating system knowledge is different than having e.g. compression knowledge related to a specific format like xz [OhPhiu0Vai] I'd also like to add that it's hard to test packages before a new release because we don't have something like Gentoo git packages [Zie1vie8le] you don't have to be an expert in your package, but keep an eye on pull requests and commits. Maybe you see a commit that looks security-relevant and can forward it to our security team. Also, OBS can happily test packages from git. [Bernhard Wiedemann] says: this would need a lot of people, hundreds. It can also deliver excellent benefit to customers in other areas, e.g. L3 support. So it might be worth it. [Kristyna Streitova] I agree, it would mean significantly increasing the number of packagers. But maybe we can be involved at least in selected important projects. We can also coordinate with other distributions so we don't have to cover upstream projects where an RH/Debian engineer is already involved. Together we can cover everything important and it would cost us fewer resources. [joongeJi0e] I fully agree with Kristyna's comments and suggestions. We need to take a few steps at a time, first focusing on upstream projects where there is no major coverage by engineers from other distros, but at the same time these projects are critical for our products. [ier7shaiJi] No, this isn't going to fly; just think about the 600 orphaned packages, which we maintain in 'minimal' mode. |
n/a | n/a |
15 | [Ja6baiSeiR] | Employ enough developers to do independent code reviews regularly | We would need more developers to look at upstream code and regularly inspect whether the code and build are fine. At least for all packages in a minimal base like MicroOS.
Perhaps even with review sign-off by developers from different geopolitical regions to build global trust. |
Perhaps | Overlaps with #14 | n/a | n/a |
16 | [Felix Niederwanger] | Reduce dependencies for security-related software | Arch Linux was e.g. [not affected by this backdoor](https://archlinux.org/news/the-xz-package-has-been-backdoored/), because they don't include systemd support for openssh. We could evaluate whether, for security-related packages (e.g. ssh), reducing the number of dependencies at the cost of removing some functionality might be a desirable goal. | Yes / Perhaps | [Filippo Bonazzi] "security-related" software quickly grows to a massive list (also part of the problem)
[Fabian Vogt] Should probably also involve build-time dependencies [ie5feu2Aij] maybe we can at least have some visualisation/graph and then identify really important packages as security-related [Michal Koutny] to [ie5feu2Aij] Something like [PageRank](https://en.wikipedia.org/wiki/PageRank) over the graph. [Wolfgang Frisch] Yes, for example `ldd /usr/sbin/sshd | wc -l` => 28. All necessary? [Wolfgang Frisch] New discussion about using dlopen() for optional dependencies: [16](https://lwn.net/SubscriberLink/969908/c6484b5638ccac1b/) |
35 | [Cathy Hu] |
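Following the `ldd /usr/sbin/sshd` observation in idea 16, a sketch that diffs the resolved runtime dependencies of a binary against a recorded baseline, so that a new transitive library (the liblzma-behind-libsystemd case) shows up as an explicit alert. The baseline file format is an assumption.

```python
#!/usr/bin/env python3
"""Sketch: alert when a security-critical binary grows new runtime dependencies.

Parses ldd output; the baseline file is a hypothetical one-library-per-line list.
"""
import re
import subprocess
import sys

def linked_libraries(binary: str) -> set[str]:
    # Keep the soname (first token) of each line, skipping the vdso pseudo-entry.
    out = subprocess.run(["ldd", binary], check=True, capture_output=True, text=True).stdout
    libs = set()
    for line in out.splitlines():
        m = re.match(r"\s*(\S+)\s*(=>|\()", line)
        if m and m.group(1) != "linux-vdso.so.1":
            libs.add(m.group(1))
    return libs

if __name__ == "__main__":
    binary, baseline_path = sys.argv[1], sys.argv[2]
    baseline = set(open(baseline_path).read().split())
    current = linked_libraries(binary)
    new = sorted(current - baseline)
    gone = sorted(baseline - current)
    for lib in new:
        print(f"NEW dependency: {lib}")
    for lib in gone:
        print(f"dropped dependency: {lib}")
    sys.exit(1 if new else 0)
```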
17 | [eaqu3eNgei] | Collect hashes from reproducible builds in a centralized service | A publicly available service that everyone can query or send their hash to. It can be used to spot if your build differs from others. | No | [Ja6baiSeiR] The hashes need to come not only from Git, but also from source and binary tarballs. Also, it would not have helped with the xz incident, as the developer injected the backdoor upstream.
Still, having a database to check against would be a relatively cost-effective measure. [Marcus Meissner] "rekor" from sigstore was made for this. |
n/a | n/a |
18 | [Michael Vetter] | disable inactive OBS package maintainers | We have known inactive Package Maintainers in OBS, who have full maintainer permissions.
E.g. People like Pascal Bleser etc. We should disable their access in a coordinated fashion (like after x months of inactivity) to avoid account hijacking. |
No | [eoy5Luth6X] I am against disabling their accounts, as we should not discourage them from coming back.
But we could consider removing their maintainer role in devel projects in a consolidated way. However, we should maybe not change that anymore for classic OBS-maintained packages, but instead think about how we deal with this in the package-centric/git approach. [Filippo Bonazzi] +1 on removing the maintainer role rather than disabling accounts. [Fabian Vogt] +1, especially for project maintainers. Could be handled like 30d autoaccept delete requests. [Eugenio Paolantonio] not related to disabling the account, but mandating 2FA in the public OBS as well might be helpful in preventing hijacks (not adding that in its own row as I think it's already in the plans, but it would be nice to speed it up) [Filippo Bonazzi] a 2FA requirement can only be added AFTER U2F/security key support if we want to avoid an angry mob [Paul McKeith] Why not disable inactive accounts (but not delete them), with a thorough process to re-activate the account? Regular confirmations and alerts to a properly vetted maintainer would be one such process. Also note my comment above about extortion of maintainers. An inactive maintainer could be extorted into coming back to a project with his/her reputation still intact. |
n/a | [Alexander Bergmann] |
19 | [Andrea Manzini] | fingerprint IFUNC resolvers in order to detect anomalous changes | when a binary has had the same IFUNC resolver for several versions, a sudden change may be legit but is at least suspect and worth checking. If we are able to inspect the ELF binary and see where the [17](https://sourceware.org/glibc/wiki/GNU_IFUNC) resolvers point, we may collect patterns | yes / Perhaps | [Fabian Vogt] There could also be a distro-wide check for libraries with conflicting symbols. That would also catch non-malicious bugs.
See also #21 |
21, 46 | [Robert Frohl] |
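A sketch for idea 19: extract the set of IFUNC symbols from two versions of an ELF object so that new or vanished resolvers can be flagged. It parses readelf's text output for simplicity; a robust implementation would use an ELF library such as pyelftools.

```python
#!/usr/bin/env python3
"""Sketch: list IFUNC symbols of an ELF object so their set can be compared
across package versions (a new or changed resolver is worth a closer look).

Assumes binutils' readelf is available.
"""
import subprocess
import sys

def ifunc_symbols(path: str) -> set[str]:
    out = subprocess.run(["readelf", "-W", "--dyn-syms", "--syms", path],
                         check=True, capture_output=True, text=True).stdout
    symbols = set()
    for line in out.splitlines():
        fields = line.split()
        # readelf symbol rows: Num: Value Size Type Bind Vis Ndx Name
        if len(fields) >= 8 and fields[3] == "IFUNC":
            symbols.add(fields[7])
    return symbols

if __name__ == "__main__":
    old, new = ifunc_symbols(sys.argv[1]), ifunc_symbols(sys.argv[2])
    for sym in sorted(new - old):
        print(f"new IFUNC resolver: {sym}")
    for sym in sorted(old - new):
        print(f"removed IFUNC resolver: {sym}")
```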
20 | [Ja6baiSeiR] | Go over all upstream projects and do a maintainer analysis | Do an **automatic**, **ongoing** maintainer analysis (+overview) of the upstream projects (not in OBS), not to identify suspects, but to look at projects where only one maintainer carries the burden. Doing the analysis will most likely also result in some discussion of WHAT a good differentiator is here, so we can say: okay, we have to support here, it's only one person | perhaps and only partially | See also #6, #7, #13
[isae1zaiTh] SUSE should determine which projects like this are in, for example, the Minimal ISO and then investigate either sponsoring them or having employees whose job (\*not extra volunteer work but something actually project-managed\*) is to be the second contributor. Perhaps in collaboration with other projects (openELA?) [Ja6baiSeiR] in case we plan to do this: we have to be careful with the results of such an analysis; this cannot be public. For one, it is interesting for attackers; second, it could make a bad impression on individual people; third, it could look like we do surveillance on open source people, which technically minded people are often not too happy about. Still, IMHO it's worth a try. |
n/a | n/a |
21 | [Andrea Manzini] | disable ifunc on selected critical packages and dependencies | build some packages that run as root (like systemd or sshd), and their dependencies, with --disable-ifunc. See for example [18](https://github.com/google/oss-fuzz/pull/10667), which triggered some raised eyebrows | yes | Related to 19
[OhPhiu0Vai] musl considers it an anti-feature as well |
19 | [Robert Frohl] |
22 | [Ja6baiSeiR] | have an ongoing, automated list of which upstreams support reproducible builds | Not having reproducible builds introduces some risk that down the line there's some injection. Let's automate this and check whether upstream does reproducible builds and/or SBOMs (including the src.tar.gz proposal), so we have to worry a bit less about them. It also identifies areas we could put some work into. | partially | note by Bernhard: according to a [recent study by the Linux Foundation](https://www.linuxfoundation.org/research/maintainer-perspectives-on-security?hsLang=en), 56% of upstream devs support reproducible builds. And my tests show 96% (99% with build-compare-filters) of our packages are reproducible.
Our packages, yes, but upstream too? [OhPhiu0Vai] there is no infrastructure for checking that from a developer's point of view, afaik. To explain better: there is no tool (GitHub CI/GitLab CI) that I can just enable and that gives me a report. Bernhard is checking with the openSUSE package builds, but I think it would help the developers if they could check themselves. The documentation helps, but it's long and you aren't even sure if your program is affected. Could we not clone such projects (with their pipelines) in GH/GL, redo the build, and check if the releases are the same? And if not... well, not reproducible? [OhPhiu0Vai] Yeah, but it would still be helpful if upstream does the work... after all it's their software. Yeah, but because upstream does not DO the work, we have to have the list :) (more philosophical discussion in the chat though) |
17 | n/a |
23 | [Bernhard Wiedemann] | support reproducible builds | This allows verifying build results independently so that we need less trust in OBS
[19](https://en.opensuse.org/openSUSE:Reproducible_Builds) is ongoing and might reach 100% in 2025. Helps only with the [build step](https://github.com/bmwiedemann/reproducibleopensuse/blob/presentation/presentation/img/codeflow.svg) |
Partially: a) if the tar release was tested to be reproducible from git, but the attacker would have chosen a different way b) helped with forensics | [Jan Zerebecki] it helped during forensics according to the oss-sec thread. Reproducible tar archives would have prevented these from being different, but the attacker would have chosen a different way. | 17 | n/a
24 | [Matej Cepl] | automatic revendoring of Go/Rust packages | In the real world, where we don't have all dependencies of Go/Rust packages unbundled, some mechanism to automatically revendor all those packages would help. | Yes (cleanup, not prevention) | [OhPhiu0Vai] something like the Fedora approach?
[Filippo Bonazzi] would not have helped in the xz case, please set column to No. ??? Why not? Do we even know which Go/Rust package is affected? And it would probably not help with prevention, but at least now with the cleanup? [Filippo Bonazzi] I see your point, helps with the aftermath [Christian Goll] Although it wouldn't have helped with xz, having automatic vendoring can close a big attack surface in OBS, as the vendor tarballs or source tarballs with the vendor directory included are created on the developers' machines. [Matthias Eckermann] I think this should be a "No" as xz is neither written in Go nor in Rust. [Matej Cepl] can you fix it, please? @ [Matthias Eckermann] Actually, I still believe it would help with cleanup in the other direction: I am certain that at least some Go/Rust packages include bindings for `xz` and bundle it. |
n/a | n/a |
25 | [Eugenio Paolantonio] | ease review and sharing of downstream patches | Upstreams might not be aware of downstream modifications. It would be nice to have a central place where major distributions share their own patches (which benefits both downstreams and upstreams, if interested).
|
Perhaps | [OhPhiu0Vai] Why not upstream them and make certain features optional if upstream doesn't want them enabled by default? We should try to have a minimum of downstream patches; sharing them doesn't seem ideal.
[Eugenio Paolantonio] upstream might not want that at all (see the actual openssh patch in this case, where it has been pretty much ignored); having a place where these patches are shared and everyone can give feedback is good in my opinion. A discussion might also arise on how to improve a patch (for example in this specific case, someone could argue against linking libsystemd and re-implement the notify protocol themselves). As an example: [21](https://www.openwall.com/lists/oss-security/2024/03/29/23); I was totally unaware that Rocky had this change at all (but I totally agree with the upstream-first approach, and I'd argue that we should periodically review the patches we ship, even if it spells trouble for long-term supported releases) |
n/a | [Johannes Segitz] |
26 | [Ja6baiSeiR] | Invest in fuzzing and performance regression testing | the supply chain attacker also tried changing things in oss-fuzz; it would also be good to have this analysis at hand. | perhaps partially | [Filippo Bonazzi] expensive since we are barely covered by openQA in our current state anyway. but makes sense
[isae1zaiTh] We already do performance testing, both generic and SAP-specific. We should share that more widely instead of hiding it in a GitLab repo and weekly reports to mailing lists. [Michal Koutny] For this situation, the proposed testing should start with TW (not wait until SLE). |
n/a | n/a |
27 | [Adam Majer] | Use AI for analysis | We can't possibly look at every patch or change that upstream makes; there are too many. But it may be possible to use AI to flag suspicious changes. I've tried ChatGPT on the exploit change, which in upstream would simply be "binary bad-3-corrupt\_lzma2.xz", and it did indicate that this could be an executable.
Use AI as a tool to potentially flag suspicious obfuscated binaries or source changes. Humans can then review any suspicious changes. Kind of akin to how we have the AI Lawyer in legaldb today. |
n/a | This would cost even more than rerunning every github/gitlab project just for checking if it is reproducible build.. but well, imho it's at least worth a try and would give us a possibility actually using AI for something concrete..
[Filippo Bonazzi] probably helpful in a general discovery sense - it might just catch something. I would not rely on it to exclude the presence of backdoors obviously. Also dangerous from a data sharing PoV, you would need to run it locally and make sure that any findings are not leaked (before actual disclosure). It could help an expert analyst by flagging something for manual review. [Fabian Vogt] Not sure how this could be trained to have any relevant detection ratio. IMO this can be a net negative if implemented wrongly, as it gives the apperance of protection but doesn't do anything. [Lars Marowsky-Bree] This will only work if it doesn't fall prey to the Base Rate Fallacy and ends up causing more effort and a false sense of security. I'm not opposed to it, but it will require significant staff investment as well. |
n/a | n/a |
28 | [Andrea Manzini] | detect function hijacking at runtime | after build, run binaries in a special sandboxed environment, with tracing at the kernel level (eBPF?) to report any function hijacking. Needs a database / whitelist of legit behavior and some tooling to exclude false positives. Someone needs to read the reports and decide on suspicious activity. Related to #9 | perhaps | n/a | n/a | n/a
29 | [Bernhard Wiedemann] | extend SLSA level 4 to upstream | In 2022 we announced that we follow the rules of SLSA level 4 for SLE-15-SP4 and later. While this ruleset has a lot of useful items, it is of limited value if upstreams don't follow these rules. We could try to gradually extend them to upstream where it makes sense, starting with the most security-sensitive projects and providing hands-on support. | perhaps | [Adam Majer] unattainable since we don't control upstream.
[Bernhard Wiedemann] There is some cross-benefit with #14 - when we have SUSE packagers that are involved as upstream contributors, they can advocate for and help with improving certain aspects of SLSA. Even without this, we can offer public guides on best practices and point upstream contributors to it. |
n/a | n/a |
30 | [Ja6baiSeiR] | Ordering external code audits | If it is too expensive to employ more developers and auditors, we could at least order external code audits for some critical parts. Personally, I would actually like internal and external (regular) code and binary audits. External parties keep us honest. | perhaps | [Filippo Bonazzi] do you have any specific parties in mind to order security audits from?
Dennis: I think I remember some companies which do that and I could look them up, but I am sure we also have some colleagues who could recommend code audit companies. [Lars Marowsky-Bree] Questionable. These would have to be very numerous and very skilled, which means they'd likely be significantly more expensive than employees. It'd make more sense to funnel this money towards a foundation or research center which does this and bundles investment from multiple stakeholders, SUSE being one of them. (And the audits would have to happen before the code is shipped for the first time.) |
38 | n/a |
31 | [Dirk Mueller] | trace symbol / library size changes | in this case the backdoor increased the size of a 500-byte function to over 80 kB. Such a change, whether malicious or not, should be detected and investigated | n/a | [Fabian Vogt] If actually malicious, this can be hidden trivially by just jumping to another function. New functions need to be excluded from the detection by design | n/a | [Paolo Perego] |
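A sketch of the size-tracing check in idea 31: compare exported symbol sizes between two builds of a shared library and report anything that grew past a threshold (the backdoored liblzma grew one function by roughly 80 kB). The threshold value is an arbitrary example.

```python
#!/usr/bin/env python3
"""Sketch: report large size changes of exported symbols between two builds
of the same shared library.

Assumes binutils' nm is available; THRESHOLD is an arbitrary example value.
"""
import subprocess
import sys

THRESHOLD = 4096  # bytes of growth considered worth a manual look

def symbol_sizes(library: str) -> dict[str, int]:
    # nm --print-size rows: <value> <size> <type> <name> (size printed when known)
    out = subprocess.run(
        ["nm", "--dynamic", "--defined-only", "--print-size", library],
        check=True, capture_output=True, text=True).stdout
    sizes = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 4:
            _value, size, _type, name = fields
            sizes[name] = int(size, 16)
    return sizes

if __name__ == "__main__":
    old, new = symbol_sizes(sys.argv[1]), symbol_sizes(sys.argv[2])
    for name, size in sorted(new.items()):
        grown = size - old.get(name, 0)
        if name not in old:
            print(f"new symbol: {name} ({size} bytes)")
        elif grown >= THRESHOLD:
            print(f"{name}: {old[name]} -> {size} bytes (+{grown})")
```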
32 | [Marcus Meissner] | remove glibc rt-audit | From Andres Freund on oss-sec
This is one aspect I've, somewhat surprisingly, not seen discussed. From what I can tell the rtld-audit infrastructure significantly weakens -z now -z relro, by making it fairly easy for something loaded earlier to redirect symbols in later libraries / the main binary. |
Yes | It is being used, see e.g. [PM-3448](https://jira.suse.com/browse/PM-3448) (glibc backport for 2.34), but doing so for privileged processes might make sense. | n/a | [Marcus Meissner] |
33 | [ie5feu2Aij] | check auto-generated artifacts with upstream/template | maybe we could have a list of templates (e.g. build-to-host.m4) for commonly autogenerated files and something that notifies us if there is a diff between the template and the file used during build | Yes | [ie5feu2Aij] easy to circumvent though, and a lot of work
[Dirk Mueller] this is a good idea. I think we can have a list of "known good hashes" of autofoo files. |
n/a | [Wolfgang Frisch] |
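A sketch of the known-good-hash check from idea 33: hash bundled m4 fragments in an unpacked source tree and flag any variant that is not in a curated database (the tampered build-to-host.m4 would not match any released gettext copy). The hash entries below are placeholders, not real digests.

```python
#!/usr/bin/env python3
"""Sketch: flag bundled autotools fragments that do not match a known-good hash.

KNOWN_GOOD is a stand-in for a curated database of hashes of upstream m4
templates (gettext, gnulib, ...); the entries below are placeholders.
"""
import hashlib
import pathlib
import sys

KNOWN_GOOD = {
    # file name -> set of sha256 digests of released upstream versions (placeholders)
    "build-to-host.m4": {"<sha256 of gettext 0.22.4 copy>", "<sha256 of gettext 0.22.5 copy>"},
}

def check_tree(root: pathlib.Path) -> int:
    suspicious = 0
    for path in sorted(root.rglob("*.m4")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        good = KNOWN_GOOD.get(path.name)
        if good is None:
            continue  # not a tracked template
        if digest not in good:
            print(f"UNKNOWN variant of {path.relative_to(root)} (sha256 {digest})")
            suspicious += 1
    return suspicious

if __name__ == "__main__":
    sys.exit(1 if check_tree(pathlib.Path(sys.argv[1])) else 0)
```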
34 | [ie5feu2Aij] | do not advertise tumbleweed as our main rolling release distro | advertise tumbleweed as "really only for testing, do not use in production" and e.g. slowroll as our main rolling release distro to use in production | perhaps | [Fabian Vogt] Doesn't really address the root cause, only has an effect if others catch it before it reaches our "main" distros
[ie5feu2Aij]: yep, agree Fabian [Filippo Bonazzi] I feel like it would needlessly ruin the reputation of TW, which is otherwise an excellent distro that works. Choosing what distro to run in production / for server use is entirely up to the user. A rolling distro has the obvious disadvantage of being on the front line for new vulnerabilities. [Lars Marowsky-Bree] Quite opposed to this; for every rare malicious new security issue introduced, significantly older releases will have 10 known-but-unfixed issues unless significant effort is invested in backporting. Let's not undercut efforts to move people to more up-to-date code. [Matej Cepl] Also, there are many (hi, [Theighohz3]!) who believe that a swiftly rolling release IS the best distro (I don't know his opinions on Slowroll). I really don't think we can find consensus on this. [Theighohz3] I'm strongly opposed to this. Slowroll is a model where, compared to Tumbleweed, users will continually have known issues for a period of time. Tumbleweed, meanwhile, had this issue addressed within a handful of hours. This xz issue should be taken as a lesson that we should be able to fix things as fast as Tumbleweed does, not slow things down [Eugenio Paolantonio] the backdoor was found on Debian unstable, so perhaps we should encourage TW usage even more (half-joking, but I think daily driving a rolling release makes sense) [Paul McKeith] Isn't this already implied by the nature of the code? Maybe not for attacks such as this, but certainly for new bugs which can be equally exploited and problematic. It seems to me that the decision to use TW for production has both benefits and risks that must be weighed by the individual consumer. I believe the nature of TW as a rolling release includes this implication, and it is the responsibility of the consumer / user to weigh the risks and decide how and when it is used. |
n/a | n/a |
35 | [Dirk Mueller] | remove linking of systemd from important daemons | systemd-notify support should be implemented in tree or in a small library rather than linking libsystemd | Yes | [Fabian Vogt] If it's really malicious, then a compromised libsystemd could fork off service processes in a way that the backdoor is forced in (LD\_PRELOAD and similar tricks)
[Adam Majer] The issue here was libsystemd's dependency on liblzma, which pulled in the backdoor. But this dependency remains in systemd proper, and then you have the scenario that [Fabian Vogt] described above. The solution is to go Rust/Golang and static link everywhere (sarcasm) [Eugenio Paolantonio] upstream is not interested in splitting (again): [22](https://github.com/systemd/systemd/issues/32028#issuecomment-2028723212). There is a work-in-progress patch to embed sd-notify directly in openssh: [23](https://bugzilla.mindrot.org/show_bug.cgi?id=2641#c28). The next systemd release will also dlopen() compression libraries rather than link with them ([24](https://github.com/systemd/systemd/pull/31550)), but of course the root cause (linking with libsystemd) will remain [Christian Goll] The attacker could have chosen any library which is pulled in by sshd, e.g. pcre. Blaming systemd is like saying GitHub is insecure because the code was developed there. [Thorsten Kukuk] This would **not** have prevented the attack: if sshd did not load liblzma, the attacker would have chosen another library. It would maybe have made it harder for them, but not impossible. The long time they worked on the xz backdoor showed that they were not under time pressure. And sd\_notify is not the only libsystemd function used by sshd (I know people like to ignore that fact), and copying the function source code from a commonly used library would only make it worse: who remembers in 3 years where all the code was copied to, if somebody finds out there is a major problem with it? |
16 | [Cathy Hu] |
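To illustrate how small the sd_notify functionality actually is (idea 35), a sketch of the documented sd_notify(3) readiness datagram without linking libsystemd; the actual openssh patch referenced above is in C, this is only a protocol illustration.

```python
"""Sketch: minimal sd_notify("READY=1") without linking libsystemd.

Follows the documented sd_notify(3) datagram protocol: send the state string
to the AF_UNIX socket named in $NOTIFY_SOCKET (a leading '@' denotes the
abstract namespace).
"""
import os
import socket

def sd_notify(state: str = "READY=1") -> bool:
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return False            # not started by systemd with Type=notify
    if path.startswith("@"):
        path = "\0" + path[1:]  # abstract socket namespace
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(path)
        sock.send(state.encode())
    return True

if __name__ == "__main__":
    # Example: tell systemd the service finished initialization.
    print("notified" if sd_notify("READY=1\nSTATUS=Listening") else "no NOTIFY_SOCKET")
```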
36 | [ie5feu2Aij] | establish better rapport with other distros' security teams | regularly meet up with the other distros' security teams IRL or via video chat and discuss how everyone improves their supply chain security | No | [ie5feu2Aij] it does not directly stop these kinds of attacks, but we would maybe have known about it a couple of hours earlier
[Filippo Bonazzi] +100, feels like something we should already have. We should know by name, and have talked to, at least a few people per distro security team [Marcus Meissner] we already have reasonably good working relationships with other PSIRTs. Usually we get these via the distros list on time. |
n/a | [Johannes Segitz] |
37 | [Hendrik Vogelsang] | Run our own CI [SCA](https://owasp.org/www-community/controls/Static_Code_Analysis) cycle against submitted source code | Instead of hoping that upstream does this... we require a/the right [SCA](https://owasp.org/www-community/controls/Static_Code_Analysis) tool, with our own ruleset / exceptions, to pass over `/home/abuild/rpmbuild` before a submit request on OBS is acceptable. | Perhaps | [Andrea Manzini] +1, related to #5 about data / binary injected in the build | n/a | n/a
38 | [ie5feu2Aij] | pay PhD students to assess our distro's security | I know some companies are paying PhD students to develop exploits for their own software; they usually know the latest techniques | Perhaps/No | [ie5feu2Aij] I know it is far-fetched, but maybe an idea
[Bernhard Wiedemann] What is "their own software" here? And would that be Red-Teaming? [ie5feu2Aij] e.g. "find a way to successfully execute a supply chain attack on Tumbleweed"; IMO it differs since they are working full-time for a long period on exploits and they can get really sophisticated due to spending more time than the average red team [Lars Marowsky-Bree] This touches a few other points raised, but yes, investing in research (and then in putting said research into practice) can help and is definitely worthwhile, but it is a very long-term commitment. |
30 | n/a |
39 | [Andrea Manzini] | harden TW default installation | by default, when installing in the "desktop" role, do not install the sshd daemon; in "server" mode, ask the user to specify a range of trusted IP addresses and open the firewall only to those. This would not prevent the attack per se, but would reduce the attack surface (e.g. forgotten installations, ssh listening on laptops without a reason, and so on) and would at least give the user more time to remediate / update safely. Valid for ssh as well as for other services (http, mDNS, etc.). Default values on a vanilla install must be on the safe side; if the user wants, they can of course change them at will. | Partially / No | [Adam Majer] keep in mind that it's nice to use TW as the base of a development container (or even production containers where software needs new deps), and in such a devel container it's nice to have ssh running for easy remote access. | n/a | n/a |
40 | [Lars Marowsky-Bree] | Monitor embedded test cases | e.g., a permanently failing autoconf test that should have succeeded was introduced, and thus the Landlock sandbox check was never enabled. If these tests, and thus changing build flags, were more prominently monitored, this would have been noticed more quickly. | n/a | [Lars Marowsky-Bree] I recognize they'd have then shifted the attack vector, but these checks can also break and suddenly enable "legacy" code by accident. A diff of the tests/build logs would make this more noticeable. | n/a | [Alexander Bergmann] |
41 | [up5Raile7u]
[Bernhard Wiedemann] |
Lobby governments for direct tax funding of open source software development / maintenance. | As the world becomes an unfriendlier place, being able to safely maintain the software we rely on is not just a business concern but also a question of national security. For example, from a European perspective it can be argued that it would be a "good thing" if most of the software (and hardware) we rely on could be reliably built, maintained, and audited on shore. | n/a | related: [25](https://publiccode.eu/)
[Bernhard Wiedemann] there are already some tech funds that will sponsor open-source devs, but many of them are concerned with features more than ongoing maintenance. [Lars Marowsky-Bree] The Sovereign Tech Fund explicitly is building up a program for Bug Hunting and Maintenance though. |
n/a | n/a |
42 | [Lars Marowsky-Bree] | Execution pattern heuristics | Fuzzily comparing execution traces (syscalls, library calls) between releases might have flagged that an unexpected call was introduced (e.g., the additional system() using the malicious payload in this example, or even significant changes to the latency of the traces) | n/a | [Andrea Manzini] quoting the original oss-security email [26](https://www.openwall.com/lists/oss-security/2024/03/29/4): when debuggers are detected, the malware does not trigger itself. From first preliminary reverse engineering, it seems it uses a custom allocator as well [27](https://gist.github.com/smx-smx/a6112d54777845d389bd7126d6e9f504). It looks like, to actually trigger the execution of system(), you'd need the attacker's private key
Replying to [Andrea Manzini]: Yes, this might also be run-time sandboxing and "learning" abnormal patterns automatically. Bernhard: Reminds me of heuristics used by Windows antivirus software. These have a potential for false positives, so we might not want to have this at runtime; also a loss of performance. So this makes more sense as a post-build test. I have sometimes diffed output from strace and it can be pretty ugly if you don't control for various kinds of non-determinism (parallelism, filesystem order, random hash seeds...) (Lars: indeed, I first encountered this idea with a heuristic virus scanner on MS-DOS called Nemesis) [Andrea Manzini] as pointed out on [28](https://github.com/amlweems/xzbot), the sshd process tree looks different on successful exploitation. This can be another pattern to consider, i.e. diff the side effects of executing the backdoor against a "normal" installation. [Ricardo Branco] The above pattern would be different in a double-fork, and PID 1 would be the parent of that process. Maybe systemd can be hardened against this? In Unix, parents have no way of knowing they adopted a process. [Oliver Kurz] openQA would not have been detected as a "debugger" here and could be used to check for certain expectations in a full system environment, but still only for testing purposes. |
n/a | n/a |
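A post-build sketch of the execution-pattern comparison in idea 42, along the lines Bernhard describes: run old and new versions under strace with the same workload and diff which syscalls appear at all. The strace options and the way the workload is passed are illustrative; non-determinism (parallelism, file order) still needs to be controlled for, as noted above.

```python
#!/usr/bin/env python3
"""Sketch: post-build comparison of syscall sets between two program versions.

Runs each command under strace with a fixed workload and diffs which syscalls
appear at all; a newly appearing execve/connect in a library update is the
kind of anomaly worth investigating.
"""
import re
import subprocess
import sys

def syscalls(cmd: list[str]) -> set[str]:
    # strace writes one "<name>(...) = ..." line per syscall to stderr;
    # -f follows forks, -qq suppresses exit-status noise.
    proc = subprocess.run(["strace", "-f", "-qq"] + cmd,
                          capture_output=True, text=True)
    names = set()
    for line in proc.stderr.splitlines():
        m = re.match(r"(?:\[pid\s+\d+\]\s+)?([a-z0-9_]+)\(", line)
        if m:
            names.add(m.group(1))
    return names

if __name__ == "__main__":
    # usage: syscall_diff.py "<old command>" "<new command>"
    old = syscalls(sys.argv[1].split())
    new = syscalls(sys.argv[2].split())
    for name in sorted(new - old):
        print(f"new syscall observed: {name}")
    for name in sorted(old - new):
        print(f"syscall no longer observed: {name}")
```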
43 | [ie5feu2Aij] | blacklist/whitelist/alerts for commits by suspicious individuals | if we would require signatures, we could add a trigger that warns us if someone on the blacklist has added a commit, or alerts us if commits were added by a new contributor | No | related to #2+#13, more a general idea | n/a | n/a
44 | [ie5feu2Aij] | test for memory management bugs | Fedora found some Valgrind errors in xz 5.6.0 that looked suspicious | perhaps | (I am not sure if QA already does that, and whether it was considered fine because they were fixed in 5.6.1)
[Marcus Meissner] I checked; we did not see them here. But in general I would recommend much more curiosity when crashes suddenly appear. [Pedro Monreal Gonzalez] Building and testing "important" packages with Valgrind or ASan support would have been able to catch this. Also, double-checking on a regular basis for valgrind suppression code included in upstream projects should be good for catching this kind of behavior. |
n/a | n/a |
45 | [Eduardo Minguez] | Implement OpenBSD's pledge on Linux | [29](https://man.openbsd.org/pledge.2) | Not sure | Honestly I'm not sure if this would have helped or not
[Bernhard Wiedemann] overlaps with #9 [Michal Koutny] Linux has seccomp instead. Maybe extend that instead of full-pledged implementation. |
9 | n/a |
46 | [Dirk Mueller] | heuristic for finding ifunc-based control flow diversion | Binarly is releasing a tool for this, see [30](https://www.binarly.io/blog/xz-utils-supply-chain-puzzle-binarly-ships-free-scanner-for-cve-2024-3094-backdoor)
apparently not open source, but a simpler version checking for ifunc relocations could maybe be built to scan for the attack path |
n/a | n/a | 19 | [Robert Frohl] |
47 | [Aex4oaj6ee] | emergency call for help for overloaded core project owner | The project maintainer was under pressure and overloaded. That was an attack point for the intruder to take over the project lead. That might happen again. Instead of handing over such projects to questionable persons, the commercial distress should take over. | yes | [Timo Jyrinki] Yes, people tend to focus on technical evaluation, but socially evaluating projects where the question "is this in (dependencies of) the core?" is answered by a solid "yes" could easily have pointed out xz as a highly desirable target. This is what the attackers did. Someone mentioned libjpeg-turbo as another example, but I haven't reviewed that from this point of view. My non-scientific 2-minute thinking exercise starting with ldd /usr/bin/zypper also gave me gpgme as a possible target (most of the time out of people's minds, few developers) for a years-long "contribution project" like what was done to xz. Possibly, however, nothing is as perfect a target as xz was: a sole-maintainer project with access to ssh via systemd.
Duplicate of #6 (I love "commercial distress", well done!) [Aex4oaj6ee] I think that a call for help should also trigger a security review/investigation of the activity. A call for help might be an indicator of past, ongoing or future malicious activity, so monitoring of this should be accompanied by certain procedures and maybe even checklists. Communicating those procedures should be done with caution, since that would help APT actors adapt their strategy. The distress line might be a target in itself, so constant, frequent analytics on those procedures should also be part of the process to be established. |
6 | n/a |
48 | [Hendrik Vogelsang] | Factory Ring 0/1 SUSE maintainer Policy (a variant of 6 & 16) | Formulate SUSE maintainer requirements for all Factory Ring 0/1 (network exposed) packages and their dependencies that require full upstream involvement of at least 1 FTE. Analyze what it would cost us and how we could satisfy this (re-focus of existing resources, new hires, fewer features, fewer releases) | No | n/a | n/a | n/a |
49 | [Santiago Zarate] | Push for generators for binary/compressed reproducers and flag binary/compressed file changes between versions in VCS | Adding a compressed/binary file immediately lowers the likelihood that a reviewer will look into how it was generated and/or its contents. While it is common to distribute reproducers as some [sort of binary](https://www.openwall.com/lists/oss-security/2024/01/18/2), sometimes the actual [procedure is known](https://bugzilla.redhat.com/show_bug.cgi?id=2258948#c0).
By flagging these kinds of changes, we could start by using some sort of malware analysis for e.g. images that use advanced steganography techniques, and pass to a second review level once that has been cleared. Moreover, if we start supporting developers in having generators for binaries inside the tree, that could help immensely, as it makes it much harder to hide malicious code in plain sight (a small sketch for flagging binary changes follows this row). |
Yes | (I think it's similar to #6?)
[Andrea Manzini] Generally a good idea, but I'm not sure it would have helped in this specific case. The binary files were part of the test suite; it is totally legitimate that a compression library needs some compressed files for testing, and they were committed a long time ago. The malicious payload was not in plain sight but spread across specific offsets and encrypted, with multiple stages to generate the final object. Just looking at the actual data file, it is quite impossible to say it is malware (see [31](https://lwn.net/SubscriberLink/967192/6c39d47b5f299a23/)) |
n/a | [Wolfgang Frisch] |
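For the flagging part of #49, a minimal sketch that lists binary changes in a git revision range, relying on the fact that `git diff --numstat` prints "-" counts for files git treats as binary. The default revision range is only an example.

```python
#!/usr/bin/env python3
"""List binary files added or changed in a git revision range, as candidates for extra review."""
import subprocess
import sys

def binary_paths(rev_range):
    """`git diff --numstat` prints "-\t-\t<path>" for files git treats as binary."""
    out = subprocess.run(["git", "diff", "--numstat", rev_range],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        added, deleted, path = line.split("\t", 2)
        if added == "-" and deleted == "-":
            yield path

if __name__ == "__main__":
    rev = sys.argv[1] if len(sys.argv) > 1 else "HEAD~1..HEAD"
    for path in binary_paths(rev):
        print(f"binary change: {path}")
```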
50 | [Jan Zerebecki] | distributed post-merge security reviews | Like [32](https://github.com/crev-dev/cargo-crev) but for everything, not just the cargo ecosystem. Public reviews signed with personal keys, using the WoT to collaborate with other users of the same upstream sources, e.g. other distribution developers. Apply this to have distinct security reviews of architecture and processes, of upstream source releases, and of package source changes.
Instead of only concentrating on finding exploitable problems, this will also need to push for better security practices. (RUSTSEC, I think, had some success pushing in that direction.) A lot of bad practices are widely known, but they are not tracked and can be neither queried nor aggregated. Making the amount of work at all feasible depends on ways to reduce the code that needs review: capabilities / sandboxing, reproducible builds, secure-by-design programming languages like Rust, reproducible CI, including CI tools that redo security proofs (e.g. there is, I think, an automatic way to show that [a certain soundness hole in Rust](https://github.com/rust-lang/rust/issues/25860) was not used in a project). |
Maybe; the general verdict is that security reviewers would have difficulty spotting it, as normal autotools code is hard to read; maybe such security reviews can create pushback against unreviewable code and blobs | n/a | n/a | [Johannes Segitz] |
51 | [Simon Lees] | Help upstreams move to using Actions / Pipelines to automatically generate release tarballs. | By using Actions / Pipelines to auto-generate release tarballs, any changes to the tarball generation process need to be reviewed at least in the same way as code changes are. To make this completely effective, GitHub would have to differentiate between manually uploaded tarballs and those generated via Actions / Pipelines. | Maybe (GitHub doesn't make it clear whether release tarballs are auto-generated) | n/a | n/a | [Cathy Hu] |
52 | [Michal Koutny] | Provide openssh alternative | openssh (as the SSH server implementation) will be a valued target of various exploits in the future, and the more so the bigger its monopoly. An alternative implementation will likely not be susceptible to the same sophisticated exploits as openssh and will not be targeted as much (security by obscurity). | No (if a variant payload had been shipped) | n/a | n/a | n/a |
53 | [Jan Zerebecki] | stop using insecure programming languages for new projects | Stop coding in insecure languages like C, C++, Bash. E.g. if you code xz in Rust with [33](https://github.com/bytecodealliance/cap-std/) it is much easier to review. Theoretically it would be possible to only review the public API and ensure the security of the rest by static machine verification, but more research on this is needed and Rust is not there yet (see e.g. Rust issues labeled unsound). If you do not use the filesystem, network, clock, or a random source, plain Rust without cap-std is enough. (There is [34](https://github.com/gendx/lzma-rs) with no use of unsafe.) xz was started before Rust was viable, but now it is possible. | Yes, but not at the time xz or openssh was started | [xae8eiQui6] In general, I totally agree with you, but - if I am mistaken, please make me aware of it - unfortunately, the last time I checked, Cargo did not allow resolving dependencies at the distribution level and, due to the unstable ABI, practically does not allow dynamic linking. Instead it pulls everything from crates.io, linking it statically for each project. The only thing that is supported is putting everything (which was previously pulled from crates.io) into a local directory. From my perspective, this needs to be fixed before Rust can become a viable mitigation against memory management flaws without adding additional attack vectors on the other hand. (Anybody able to break RSA 2048 in the future, and thereby able to tamper with TLS root CAs, might be able to tamper with the pulled crates in real time as well.)
[Andrea Manzini] sharing related thoughts from the lzma-rs author: [35](https://gendignoux.com/blog/2024/04/08/xz-backdoor.html) |
n/a | n/a |
54 | [Jan Zerebecki] | use capability based designs | This is a way to make sandboxing easier, see also the sandbox idea (9). It can be used in some existing languages, e.g. in Rust with [36](https://github.com/bytecodealliance/cap-std/). There are also capability-based languages like [37](https://github.com/microsoft/verona), as well as systems that use capability-based designs to sandbox existing C code: [38](https://github.com/microsoft/verona-sandbox) and [39](https://cheriot.org/rtos/supply-chain/auditing/2024/04/04/cheriot-supply-chain.html) | Yes, but not at the time xz or openssh was started | n/a | n/a | n/a |
55 | [Dirk Mueller] | No downstream patches in critical projects | A significant factor in breaking "Linus's Law" was that (open)SUSE carries significant downstream patches in openssh that allowed liblzma to be loaded into sshd via libsystemd. We should be more careful with patches, especially if they increase the attack surface | Yes | [Michal Koutny] Regularly return to upstream and re-submit downstream patches for re-evaluation. (To [paraphrase](https://mastodon.social/@pid_eins/112206258981426905) Lennart Poettering: "It only takes a state actor to have your patches merged.") | n/a | n/a |
56 | [Stanislav Brabec] | Provide/sponsor/create trusted tarball release service (makedist service)
Alternative: provide makedist service as a part of OBS. |
Currently, tarball releases on GitHub (and possibly other servers) cannot be trusted. The developer uploads a tarball created at home, which opens a backdoor for uploading malicious code that does not exist in the Git repository, e.g. in auto-generated files that nobody reads (autotools m4 files, configure, etc.) (a small tarball-vs-git comparison sketch follows this row).
It is possible to create a trusted service with a trusted toolchain. The developer will provide a link to the git repository that contains the makedist description: - the trusted service - the devel container that should be used - additional packages (if the makedist service supports it) - the commands that should be used to create the distribution tarball. |
Yes, it will prevent adding a backdoor to auto-generated files inside the tarball | [eoy5Luth6X] this would be fulfilled by making tarball creation part of the build process. See no. 11 | n/a | [Johannes Segitz] |
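Related to #56 (and #11), a hedged sketch of the kind of check such a service, or a reviewer, could already run today: compare the file list of an uploaded release tarball against `git archive` of the corresponding tag, so that files present only in the tarball (generated configure, m4 files, ...) are surfaced for separate review. The tarball path and tag in the usage line are placeholders, the script must be run inside the git checkout, and comparing contents (hashes) rather than just names would be the obvious next step.

```python
#!/usr/bin/env python3
"""Compare a release tarball against `git archive` of the matching tag.

Files that exist only in the tarball are exactly the content a reviewer never
sees in the git history and should be inspected separately.
"""
import io
import subprocess
import sys
import tarfile

def file_names(tf):
    """Regular files in a tarfile, with the leading '<project>-<version>/' prefix stripped."""
    names = set()
    for member in tf.getmembers():
        if member.isfile():
            names.add(member.name.split("/", 1)[-1])
    return names

def main(tarball, tag):
    with tarfile.open(tarball) as tf:
        released = file_names(tf)
    # `git archive` writes a tar stream of exactly what is tracked at the tag.
    raw = subprocess.run(["git", "archive", "--format=tar", f"--prefix={tag}/", tag],
                         capture_output=True, check=True).stdout
    with tarfile.open(fileobj=io.BytesIO(raw)) as tf:
        tracked = file_names(tf)
    for extra in sorted(released - tracked):
        print(f"only in release tarball: {extra}")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])  # e.g. ./check.py ../project-1.2.3.tar.gz v1.2.3
```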
57 | [Dirk Mueller] | do not link binaries that have .o code not built by us | Every .o file produced by our compilers needs to have a marker (maybe a signature) that cannot be forged by an external attacker. The linker then aborts if .o modules are included that do not have a matching signature | Yes | [Jan Zerebecki] I think there is a feature in the Linux kernel to only run code that is signed, which would be the more general idea of this. Not sure how well supported this is in our general user space yet. See keyword Integrity Measurement Architecture (IMA).
[Anthony Iliopoulos] IMA would not have helped, since the code (the final library output from the rpm build) would be signed. [Marcus Meissner] Simple idea: have the compiler inject a build-time unique random nonce, but an attacker could also mimic this. It is hard to do this in a fully cryptographically strong fashion. [Anthony Iliopoulos] Also, even if we sign every compiler-generated binary object, it still doesn't prevent a malicious package from invoking the signing toolchain during the build and signing the malicious object (or feeding it obfuscated malicious source code). |
n/a | [Thomas Leroy] |
58 | [Jan Zerebecki] | package level isolation | There would still be one other way open for a package with a malicious build script: it could influence the package build so that the resulting package runs malicious code as root on installation. There are many packages and too few people to review all of them in sufficient detail. But luckily most packages do not need to run code as root.
Package level isolation or sandboxing could fix this. For details see [40](https://github.com/affording-open/package-sandboxing) |
No, but any package with a compromised build script could do this | n/a | n/a | [Wolfgang Frisch] |
59 | [xae8eiQui6] | Freeze the versions of packages, even in Tumbleweed, updating them only after each SLES release | The suggestion is to freeze the versions of a selection of specific packages identified as critical and change their release cycle, even in Tumbleweed. This would simplify manual in-depth reviews and ensure the packages are tested as long as possible before moving into any product.
Practically, almost only SSH, any linked libraries, and anything able to interfere with them come to my mind, but additional packages might be identified when one takes an in-depth look. This would complement #13 (Package Scoring) and #15 (Manual Reviews). The packages would only be updated if required by compatibility, security updates or absolutely required features for the next release. One might even consider just backporting some patches instead of updating a package. |
Maybe | [Paul McKeith] FWIW, I really like this idea. It seems a practical, easy, and effective general practice. Updates for compatibility could be an Achilles heel if not implemented with a very thorough "in-depth" review that follows all the paths to other packages that could be compromised. | n/a | [Johannes Segitz] |
60 | [Stanislav Brabec] | Compiler signatures | The compiler could add a source checksum or signature to the debuginfo. Anybody could then check whether the source code matches the compiled code.
The idea will not work for generated source code; it would be more complicated there. |
Yes, in some cases | n/a | 57 | [Thomas Leroy] |
61 | [Paul McKeith] | Layered defense in-depth policies | This may be obvious to this group of security-focused personnel, but I think it is worth pointing out that multiple actions must be used in concert with each other, IOW a layered defense-in-depth approach. Individually, most suggestions cannot be 100% effective against this attack. Identifying weaknesses in any single policy is good, but these can be addressed with other policies. Some of the suggestions were marked as "no" for helping prevent this incident, but I contend that when used together with others they could have been effective, or at least helpful.
So a more holistic view must be taken. Multiple policies used in concert with each other can be effective, but if not done carefully could also open other threats. Protections must involve both technical and social constructs together to combat the methods used in this attack. |
Required | n/a | n/a | n/a |
62 | [Hans-Peter Jansen] | rpm script execution exposure | Add a zypper option to enable shell tracing for rpm scripts ('sh -o xtrace') in order to make these operations more transparent, because those scripts are always executed with root permissions (a related audit sketch follows this row).
I believe this will at least somewhat discourage offenders who plan to add malicious code to these scripts, especially if those trace logs are kept beside or within the installation log. |
No | n/a | n/a | n/a |
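The zypper xtrace option proposed in #62 does not exist yet; as a related audit step that works today, one can at least dump the root-executed scriptlets of a package before installing it, using the existing `rpm -qp --scripts` query. A small sketch:

```python
#!/usr/bin/env python3
"""Dump the root-executed scriptlets of .rpm files before installing them."""
import subprocess
import sys

def dump_scriptlets(rpm_path):
    """`rpm -qp --scripts` prints %pre/%post/%preun/%postun (and trigger) scriptlets."""
    out = subprocess.run(["rpm", "-qp", "--scripts", rpm_path],
                         capture_output=True, text=True, check=True).stdout
    print(f"== scriptlets in {rpm_path} ==")
    print(out if out.strip() else "(no scriptlets)")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        dump_scriptlets(path)
```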
63 | [Dirk Mueller] | forbid encrypted zip | [Marcus Meissner] once suggested bypassing virus scanning by putting malware detected by virus scanners into encrypted zip files.
This does not seem like a good idea. |
No | n/a | n/a | n/a |
64 | [Berthold Gunreben] | scan binaries for wrong architecture objects | If binaries in OBS contain objects for foreign architectures, rpmlint should flag this. Scanning could also cover archived data (tar, zip, ar, ...) (a small sketch follows this row). | n/a | n/a | n/a |
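A minimal sketch of the scanner in #64, reading only the ELF header (e_machine at offset 18) of each file under a build root; archives (tar, zip, ar) would need to be unpacked first, and the e_machine name table is deliberately short.

```python
#!/usr/bin/env python3
"""Report ELF objects whose e_machine differs from the expected build target architecture."""
import os
import struct
import sys

# A few common e_machine values; extend as needed.
EM_NAMES = {3: "i386", 21: "ppc64", 22: "s390", 62: "x86_64", 183: "aarch64", 243: "riscv"}

def elf_machine(path):
    """Return the e_machine value of an ELF file, or None for non-ELF files."""
    try:
        with open(path, "rb") as f:
            header = f.read(20)
    except OSError:
        return None
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return None
    endian = "<" if header[5] == 1 else ">"               # EI_DATA: 1 = little, 2 = big endian
    return struct.unpack(endian + "H", header[18:20])[0]  # e_machine half-word

def scan(root, expected):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            machine = elf_machine(path)
            if machine is not None and EM_NAMES.get(machine, str(machine)) != expected:
                print(f"{path}: ELF for {EM_NAMES.get(machine, machine)}, expected {expected}")

if __name__ == "__main__":
    scan(sys.argv[1], sys.argv[2])  # e.g. ./scan.py ./BUILDROOT x86_64
```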