I would argue a publicly auditable software stack would be a strong alternative to the self audited stack. I run a completely open source OS and run all non open software on a machine I don't trust.
If someone can't have that, then surely it would at least be good to have a system that doesn't autorun things automatically and stops common attacks like bootloader viruses, email viruses, etc.
I think AV is meant to deal with "minor tactics" like stopping things from autorunning or blocking common kinds of self replicating code and perhaps stopping known bad things.
That blacklist approach most AV takes can never guarantee security, but maybe some of the time it helps.
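To make the limitation concrete, here's a minimal sketch of the blacklist approach in Python. The "known bad" corpus and sample payloads are entirely made up for illustration; real AV signatures are more sophisticated than whole-file hashes, but the structural weakness is the same:

```python
import hashlib

def sig(data: bytes) -> str:
    """Fingerprint a payload the way a naive signature scanner might."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blacklist: signatures of samples someone already analyzed.
KNOWN_BAD = {sig(b"EICAR-like test payload")}

def is_flagged(data: bytes) -> bool:
    """Flag a file only if its hash is already on the blacklist."""
    return sig(data) in KNOWN_BAD

assert is_flagged(b"EICAR-like test payload")      # known sample: caught
assert not is_flagged(b"freshly written malware")  # unseen sample: missed
```

The second assertion is the whole argument: a blacklist can only ever catch what someone has already seen and catalogued.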
I would argue that almost all FOSS is insecure, and many projects (e.g. OpenSSL) have had easy-to-spot vulnerabilities for years. The important part of assurance, closed or open, is review. People also often treat open vs. closed as a dichotomy rather than a spectrum. To help, I wrote an essay illustrating the security levels offered at various points in the spectrum of open vs. closed source here:
That's what security takes against even black hats these days. It can be simplified with a strong TCB, better hardware, and better languages + toolchains. The problem is that only a tiny few FOSS projects are doing that, and not many more commercial ones. Whitelisting, stack canaries, AV, firewalls... this is all just added complexity around the root problem, and hackers bypass it regularly. It isn't security except against the incompetent.
Getting the real thing might require throwing away a lot of code or apps. Or virtualizing them on secure architectures with crazy good interface protections. That's why the market as a whole won't do it. The good news is there are small players making such things: e.g. Turaya Desktop, GenodeOS, CheriBSD, Secure64 SourceT. We'll get more over time, but it would help if waves of FOSS coders invested in stuff that provably works instead of what holds them back. The GenodeOS, L4, and MirageOS communities are the only ones I know of doing it at the endpoint these days.
I'd agree with that argument for the general case. Yet there have been proprietary systems that resisted attacks in their attack model (with source code!) for years, and all were designed with established methods for increasing assurance. There are dozens done that way, esp. in the defense and smartcard markets. There are a few OSS projects with either good design or code review (medium assurance) that were done by pros and open-sourced. As far as the actual FOSS development model goes, there are zero high-assurance security offerings done that way. That's despite decades of examples with details published in journals, on the web, etc. to draw on. So, FOSS has never done high security, NSA pentesters did give up on a few proprietary offerings, and therefore FOSS is inferior to proprietary in high security because only one has achieved it. As a matter of fact, the open-source, commercial MCP OS from Burroughs was immune to pointer manipulation and code injection in 1961 via two bits of tag. FOSS systems haven't equaled its security in five decades.
They need to catch up really quick because they could be the best thing for high assurance. The mere fact that there's tons of labor, the projects are free, and they aren't motivated by commercial success avoids the main obstacles to high-assurance commercial development: the processes are labor-intensive, difficult to integrate with shoddy legacy stuff, and hard to sell. If FOSS ever groks it, it could run circles around the other projects and products in terms of assurance. The closest thing is the OpenBSD community, but they use low-assurance methods that lead to the many bugs they fix. Their dedication and numbers combined with clean-slate architecture, coding, and tools would produce a thing of beauty (and security).
And, yet, the wait for FOSS high assurance continues. If you know anyone wanting to try, Wheeler has a page full of FOSS tools for them to use:
And when they are found they are fixed and the community is always outraged.
When a closed source project has a bug in it, sometimes the knowledge of that bug is kept hidden. Maybe most of the time it is handled responsibly, but without oversight how can an outsider tell?
You mean for those few FOSS projects that both get plenty of code review and fix those bugs? Sure, those probably are better off than average proprietary. Much worse than the proprietary niche that's quality-focused, though. Yet the community isn't outraged enough to use low-defect processes to prevent the next set. Further, the fact that both FOSS and proprietary focus on getting features out quickly with little review ensures plenty of bugs in both.
The trick to either is that the commitment to quality/security is real, each commit is reviewed before acceptance, and independent verification is possible. With proprietary, the confirmation can come from a trusted third party, several third parties (mutually suspicious), or source provided to customers (but still paid).
In the long run, the source being publicly available means the bug will be found.
> Much worse than proprietary niche
I disagree, but even if I didn't, how can the average purchaser of software discern quality software from junk? If they had the source, they could pay an expert.
I agree a commitment to quality, and therefore security, is important. But I feel that if all other things are equal, open-source software will always have an advantage over closed-source software.
Reliability, determinism, and security vulnerabilities are a good start for the purchaser. For the reviewer, we already know what methods [1] historically improved the assurance of software. Each method they added found more bugs. That most proprietary and FOSS software use little rigor is why they're insecure. Only a few proprietary or academic offerings, none community-driven, had the rigor for the B3/A1/EAL6/EAL7 process. I give examples here [2] for those who want to see the difference in software lifecycle.
Can you name one FOSS product designed like that? Where every state, both success and failure, is known via design, along with covert channels and source-to-object code correspondence? I've never seen it. Although it has happened for a number of proprietary products whose claims were evaluated by NSA and other professional reviewers for years straight without serious flaws found. So, for high security, the "proprietary niche" that does that has beaten FOSS by far, and mainstream FOSS is comparable to mainstream proprietary in quality (i.e. the priorities of the provider matter most).
FOSS can potentially outdo proprietary in highly assured systems given they have free labor. In practice, they do whatever they feel like doing and so far that's not using the best software/systems engineering practices available. So, I don't trust FOSS any more than proprietary except in one area: less risk of obvious subversion if I verified transport of the source and compiled it myself. Usually plenty of vulnerabilities anyway, though. Would love to see more high assurance efforts in FOSS.
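Verifying transport of the source amounts to comparing a digest of what you downloaded against one obtained out of band (e.g. signed release notes). A minimal sketch; the file name and the idea of a maintainer-published digest are placeholders for whatever the actual project provides:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected: str) -> None:
    """Refuse to proceed to the build step if the archive was altered in transit."""
    actual = sha256_file(path)
    if actual != expected:
        raise ValueError(f"digest mismatch: got {actual}")

# Usage (hypothetical file and digest):
# verify("project-1.0.tar.gz", "<digest published by the maintainers>")
```

This only rules out tampering in transit; it says nothing about whether the source itself is trustworthy, which is the reviewer's job.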
It sat out there for a long time and was fixed. All parties involved were notified. There was never the opportunity for anything else to happen. This is the nature of open source: no room for deception in the long run.
If the same kind of bug (major impact, wide distribution, and long exposed history) existed inside the code of Microsoft, Apple, or Oracle, no reasonable person would think the company responsible would let that out with details on impact level. The hit to stock prices would be enormous. They would silently issue a patch and hope no one notices, and likely no one would, because there is no oversight. There is room for deception built in, even if it is not intended as such.
I am aware that companies do patch and do frequently notify, but they rarely let all the information out for public consumption. The larger the issue, the more they downplay it. For how many years did the buffer overflow in the IE6 address bar or the Windows shatter privilege-escalation attack remain exploitable in Windows?
Shatter was first described on Windows XP before 2002 and was still present when Windows XP reached end of life. The people affected never had any say, and no one outside of Microsoft ever had any opportunity to fix it.