On Fri, 6 Sep 2013, Catalin Marinas wrote:
> On Fri, Sep 06, 2013 at 02:50:45AM +0100, Nicolas Pitre wrote:
> > I understand the issue with having a secure OS that needs to protect itself from the nasty Linux world. However, if I understand the model right, the secure OS is there to provide special services to the non-secure OS and not the reverse. Therefore the secure OS should simply pack and hide its things when told to do so, right?
> The problem is when it is *not* told to do so.
Well, just halt the whole system in that case. Or raise a fault if you want to be nice.
This is like memory protection in user space: if you don't ask the kernel for extra memory before touching it, you get killed. But the kernel doesn't interfere with user space's management of that memory beyond giving it blank pages.
> If the non-secure OS is allowed to disable coherency (at the CCI level or simply by shutting down a CPU in a cluster) *without* the secure OS being informed, the trust model is broken (or has to include the non-secure OS). In a more paranoid world, this part must be moved into the secure firmware, and there is no way to do it without a similar "last man" state machine. It is probably hard to mount an attack, but random data corruption in the secure OS is not something that can be ignored.
As I said, the secure OS model should imply veto power only, not executive power. That ought to be sufficient. The non-secure world can learn to inform the secure OS of its intent.
> > Of course option 1 is the most flexible in terms of implementation efficiency, but it has drawbacks as well.
> Too much flexibility also has drawbacks, and we have the ARM SoC past experience: code duplication, a difficult single zImage, people asking for machine quirks in __v7_setup (though we managed to prevent them so far).
No no... code duplication and a difficult single zImage are not arguments I'll buy. Those are solely a matter of project structure and code design. The proof is that we're getting there now despite varied machine architectures, and if we had had the manpower at the time we could have done it from the start. The difficulty is in changing established habits, just like with this secure OS model.
> A unified approach (like a standard firmware interface) should be the default, and we can later relax it if there are good reasons. This unification is more important for the server distro space, as the mobile space tends not to contribute that much back into the kernel.
Again, this is a fallacious argument. The fastest-growing architecture in the Linux kernel at the moment is ARM32, and that is mostly about the mobile space.
It is true that the server space is not as deeply concerned about power management as the battery-powered mobile space. The time-to-market pressure and life cycle are quite different as well, so that tends to favor a standard firmware interface. OTOH, servers aren't into cost cutting to the point of reducing the number of MCUs to zero and putting the equivalent functionality into TrustZone. That helps keep the firmware simple.
And this is not my call to make either. System vendors will choose their own poison for themselves. Between risk-inducing complexity in secure firmware and non-standard, low-level, machine-specific L3 calls, I don't think there is much to rejoice about.
> As I said above, this complexity in the firmware is required to *increase* security.
IMHO this statement is nonsense, and a clear indication that something somewhere was not designed properly.
Nicolas