ZSystems/QAndALinuxONE


Q & A For LinuxONE Instances

This page collects questions and answers that a new user might have when first working on a LinuxONE instance.

Various Questions (And Their Respective Answers)

=== Question: As a Linux user, all tools look the same as on my 'normal' laptop/PC/... So, what's different? What are the s390x-specific things I've got to watch out for? ===

Answer:

You won't notice a difference unless you explicitly look for it - with commands like df, dmesg, or lsmod. See the bullet items below for details:

  • Byte order is BIG endian. See the Wikipedia article on endianness for details. Network byte order has always been big endian (so endianness was already "unified" across systems in that sense, actually making big endian "more natural" in some way - at least when exchanging data over the "wire"). A minimal C sketch illustrating this follows right below.
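    A minimal C sketch (my own illustration, not from the original page) that shows the byte order at runtime and why htonl() is effectively a no-op on big-endian s390x:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl() */

    int main(void)
    {
        uint32_t value = 0x01020304;
        const unsigned char *bytes = (const unsigned char *)&value;

        /* On big-endian s390x the most significant byte comes first in
         * memory, so this prints 01; on little-endian x86_64 it prints 04. */
        printf("first byte in memory: %02x\n", bytes[0]);

        /* Network byte order is big endian, so on s390x htonl() leaves
         * the value unchanged. */
        printf("htonl() is a no-op here: %s\n",
               value == htonl(value) ? "yes (big endian)" : "no (little endian)");
        return 0;
    }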
  • Various z-related hints in the dmesg output:
    setup: Linux is running as a z/VM guest operating system in 64-bit mode
    [...]
    [    2.862483] zpci: PCI is not supported because CPU facilities 69 or 71 are not available
    [...]
    [   13.968627] systemd[1]: Configuration file /usr/lib/systemd/system/zvmguestconfigure.service...
    [...]
    [    8.195237] dasd-fba 0.0.0100: New FBA DASD 9336/10 (CU 6310/80) with 102400 MB and 512 B/blk
    [    8.269081] qeth: loading core functions
    [    8.290952]  dasda:(nonl) dasda1
    [    9.445010] EXT4-fs (dasda1): mounted filesystem 8b6b381b-7e74-4634-ab7e-330b20ab1b6d with ordered data mode. Quota mode: none.
    [...]
    [   17.960600] qeth: register layer 2 discipline
    [   18.014944] qeth 0.0.1000: CHID: ff00 CHPID: 0
    [   18.029199] qeth 0.0.1002: qdio: OSA on SC 2 using AI:1 QEBSM:0 PRI:1 TDD:1 SIGA:RW 
    [   18.064899] qeth 0.0.1000: Device is a Virtual NIC QDIO card (level: V724)
                   with link type Virt.NIC QDIO.
    [   18.087252] qeth 0.0.1000: MAC address 02:c1:21:88:50:bc successfully registered
    [   18.181707] vmur: z/VM virtual unit record device driver loaded.
    [   18.282210] qeth 0.0.1000 eth1000: renamed from eth0
    

    Summary: z/VM (vmur), zpci (not supported due to missing CPU facilities[1]), qeth, dasd, zvmguestconfigure.service

    [1]: Those facilities 69 and 71 are not missing from the CPU itself; they're "just" not enabled in z/VM (at least not for our LinuxONE instances)

  • From dracut: different modules in the initrd image:
    dracut: *** Including module: zipl ***
    dracut: *** Including module: dasd_mod ***
    dracut: *** Including module: dasd_rules ***
    [...]
    dracut: Stored kernel commandline:
    dracut: rd.cio_accept=0.0.1000,0.0.1001,0.0.1002,0.0.1000,0.0.1001,0.0.1002
    dracut: rd.dasd=0.0.0100
    dracut:  root=/dev/disk/by-path/ccw-0.0.0100-part1 rootfstype=ext4 rootflags=rw,relatime
    
  • s390x systems can have two types of disks: the usual ones like /dev/sd[a-z][0-9]+ and DASD ones like /dev/dasd<x>. What you see depends on what the system administrator has configured; see the sketch right below for a quick way to list both kinds.
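    A small C sketch (a hypothetical helper of my own, not from s390-tools) that lists both kinds of disk device nodes; lsblk or lsdasd (from s390-tools) will of course do the same and more:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Scan /dev for DASD nodes (dasda, dasda1, ...) and the usual
         * SCSI whole-disk nodes (sda..sdz; partitions are skipped here
         * to keep the sketch short). */
        DIR *d = opendir("/dev");
        struct dirent *e;

        if (!d) {
            perror("/dev");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (strncmp(e->d_name, "dasd", 4) == 0 ||
                (strncmp(e->d_name, "sd", 2) == 0 && strlen(e->d_name) == 3))
                printf("%s\n", e->d_name);
        }
        closedir(d);
        return 0;
    }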
  • The output of cat /proc/cpuinfo or lscpu differs greatly (which should come as no surprise, as we're not on x86_64, but it's still worth mentioning)
  • (Analogously for uname:)
    uname -m
    s390x
    
  • Watch out for s390-specific modules in the output of the lsmod command. More precisely:
    lsmod | grep s390
    s390_trng              16384  0
    crc32_vx_s390          16384  2
    ghash_s390             16384  0
    chacha_s390            20480  0
    libchacha              16384  1 chacha_s390
    aes_s390               28672  0
    des_s390               20480  0
    libdes                 28672  1 des_s390
    sha3_512_s390          16384  0
    sha3_256_s390          16384  0
    sha512_s390            16384  0
    sha256_s390            16384  0
    sha1_s390              16384  0
    sha_common             16384  5 sha3_256_s390,sha512_s390,sha256_s390,sha1_s390,sha3_512_s390
    rng_core               24576  2 zcrypt,s390_trng
    
    lsmod | grep dasd
    dasd_diag_mod          20480  0
    dasd_fba_mod           20480  4
    dasd_mod              172032  4 dasd_diag_mod,dasd_fba_mod
    
    lsmod | grep qeth
    qeth_l2                61440  1
    qeth                  155648  1 qeth_l2
    ccwgroup               24576  1 qeth
    qdio                   53248  1 qeth
    bridge                344064  2 br_netfilter,qeth_l2
    

    (I'm aware that those may not be all s390-specific modules. In case I missed some, please let me know. TIA.)

An important package for finding out more about your s390x/LinuxONE instance: s390-tools

=== Question: lscpu tells me that the s390x architecture is also affected by Spectre v1/v2. How come? Aren't Spectre vulnerabilities limited to Intel CPUs and thus the x86-64 architecture? ===

Answer: I got very interesting answers from Marcus Meissner (a member of the SUSE security team) in several mails, and he gave me permission to quote his answers here.
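For context (this sample is mine, not part of the quoted mails), the lines in question look roughly like this on a LinuxONE instance - the exact mitigation strings vary with kernel version and machine generation:

    lscpu | grep -i vulnerability
    Vulnerability Spectre v1:   Mitigation; __user pointer sanitization
    Vulnerability Spectre v2:   Mitigation; etokens

Now for the quoted answers: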

lscpu outputs what the kernel reports via sysfs in
/sys/devices/system/cpu/vulnerabilities/, so it is "truth".

Some details:

Spectre v1 is a class vulnerability affecting all modern CPUs that have
transient execution (i.e., where the CPU speculatively executes code
ahead and does data prefetches based on speculative results).

Spectre v2 is also a class vulnerability affecting some classes of CPUs,
including IBM Z, where transient execution / speculation happens over
indirect (computed) jumps.

Both affect not just host kernels but also guest kernels, and need to be addressed in both.

- Spectre v1 will likely never be fixed CPU-wise, as this would otherwise
  lose LOTS of performance.

  Instead, software has been made responsible for avoiding these Spectre v1
  problem code patterns in selected privileged software parts, like the
  Linux kernel.

- Spectre v2 can be fixed CPU-side, but that depends on the vendor and their
  schedule. Until the CPU is fixed, or if the vendor has decided not to fix
  the CPU, specific workaround code patterns must again be used in
  privileged software.


As you can see in the output above, "Mitigation: ..." means that the guest
kernel has been supplied with mitigations / workarounds for both issues
and so is not affected anymore.
Spectre v1 will NEVER be fixed on the CPU side, as I wrote, since
speculation is a core part of the performance of current CPUs.

So for Spectre v1 there is always a need for mitigations / adjusted
software.

For Spectre v2 it depends on the CPU vendor; they might also choose
to require specific software adjustments - perhaps this is the final
state on IBM's side.

I did some quick googling but did not find the IBM Z guidance on Spectre v2.
I think etokens are the final mitigation for it though.


So to be clear:

- CPUs continue to be AFFECTED (intentionally, so as not to lose performance).
- Software is however applying MITIGATIONS to these issues where needed.

I would be interested in a pointer to IBM Z programming guidance on
Spectre v2, but otherwise I think everything is right.

Regarding my remark:

> The "all modern CPUs" is decisive here - I honestly wasn't aware of that
> (read: I had the (obviously false) impression that it's an "Intel only"
> thing). So, if I draw the right conclusion: There's NO modern CPU that's NOT
> affected by Spectre v1/2 vulns by default - it concerns ALL CPUs by default.
> Is my conclusion correct?
Spectre v1, yes.

I am not 100% up to date on all CPUs, but this kind of transient
execution is usually a "performance feature".

        Intel, IBM Z, IBM Power, Arm AArch64 do that at least.

Spectre v2 ... it depends a bit on the CPU, not sure if all of the
modern ones still have the problem.
 (e.g. if they have a speculative barrier on an indirect jump)

Regarding my question:

> But why can a software solution in the kernel get by (almost) WITHOUT performance
> loss whereas if the bug were fixed in CPU microcode, it would cost LOTS of
> performance?
In the CPU you would need to stop the speculative execution at nearly every
kind of data access, as the CPU does not know which could be exploited and
which could not.

These days, the speculative pipelines can look 100+ instructions ahead
and do computations already.

If you stopped it, the look-ahead would go down to perhaps something
around 5 instructions, leading to a severe loss of performance.

On the software side though there is an additional fix. Simplified:

Instead of:

        if (a < maxsize) {
                return b[a];
        }
where the speculation would already access b[a] regardless of the check
on a, the new code does:

        if (a < maxsize) {
                a = a & bit_mask_based_on(a,maxsize);
                return b[a];
        }

The bit_mask_based_on() function is a weird function, which returns
0 if a >= maxsize, and a full-sized all-ones (1111...) pattern if not.

include/linux/nospec.h from the Kernel:
         return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);

So basically the speculation also takes the array size into account via a
logical expression, and this removes the possibility of out-of-bounds
speculative reads.

This piece of code causes a very slight overhead, but fully avoids the
speculative overread: an out-of-bounds index never goes into the
address calculation, as it would be masked to 0.
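The same masking idea as a small, self-contained C program (mask_of() mirrors the kernel expression quoted above; the function name and the demo around it are my own illustration, not kernel API):

    #include <stdio.h>

    #define BITS_PER_LONG (8 * (int)sizeof(long))

    /* Evaluates to ~0UL when index < size and to 0UL when index >= size,
     * without any branch the CPU could mis-speculate. Like the kernel,
     * this relies on arithmetic right shift of negative signed values
     * (true for gcc/clang on s390x and x86_64). */
    static unsigned long mask_of(unsigned long index, unsigned long size)
    {
        return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
    }

    int main(void)
    {
        unsigned long b[4] = { 10, 20, 30, 40 };
        unsigned long maxsize = 4;

        for (unsigned long a = 0; a < 6; a++) {
            if (a < maxsize) {
                /* Even if the CPU speculates past the bounds check,
                 * the masked index is forced to 0 for a >= maxsize,
                 * so no out-of-bounds address is ever formed. */
                unsigned long safe = a & mask_of(a, maxsize);
                printf("b[%lu] = %lu\n", a, b[safe]);
            } else {
                printf("index %lu rejected\n", a);
            }
        }
        return 0;
    }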

=== Question: For the x86_64 architecture, there's plenty of documentation available on the meaning of the CPU-specific flags (found in /proc/cpuinfo output). But how about something analogous for s390x? ===

Answer: Thankfully, Ihno Krumreich has provided a list (originally via mail; I explicitly asked for permission to (also) publish it here - thanks, Ihno! Much appreciated!):

s390x-Specific CPU Feature Flags

    Flag    Meaning (short explanation)
    dflt    Hardware-accelerated compression
    dfp     Decimal-Floating-Point instructions; see chapter 20 of the Principles of Operation
    edat    Enhanced Dynamic Address Translation (as a feature flag, edat indicates 1 MB huge-page support)
    msa     Message-Security Assist (support for signature and Data-Encryption-Algorithm operations) => shows that CPACF is installed
    sie     Start Interpretive Execution (the Linux instance can itself be a hypervisor, like a z/VM instance)
    stfle   Store Facility List Extended: a list of bits providing information about the installed facilities is stored, beginning at the doubleword specified by the second-operand address
    vx      Indicates that the Vector Extension Facility is available
    vxd     Vector-Decimal Facility
    vxe     Vector-Enhancement Facility 1
    vxe2    Vector-Enhancement Facility 2
    vxp     ? (TBD; most likely also somehow vector-related)

File:Z-arch-prinz-of-op 12 Edition 2017-09 z14 dz9zr011.pdf
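To check which of these flags your instance actually reports, here is a small C sketch (my own illustration, not part of s390-tools) that extracts the flags from the "features" line of /proc/cpuinfo:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On s390x, /proc/cpuinfo contains a line like
         *   features : esan3 zarch stfle msa ... vx vxd vxe ...
         * Print each flag on its own line. */
        char line[1024];
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "features", 8) != 0)
                continue;
            char *flags = strchr(line, ':');
            if (flags)
                for (char *tok = strtok(flags + 1, " \t\n"); tok;
                     tok = strtok(NULL, " \t\n"))
                    puts(tok);
            break;
        }
        fclose(f);
        return 0;
    }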

=== Question: Several desktop environments/window managers/graphical applications have (also) been ported to s390x. How can I use them (remotely)? ===

Answer: See ZSystems/GraphicalApplicationsLinuxONE for how to use the virtual framebuffer X server (Xvfb; needed for obtaining a DISPLAY) in conjunction with a VNC server (which automatically connects to the first DISPLAY found) and an SSH tunnel (needed in order to prevent transmission of the VNC password in clear text).

TBD: Waiting for an answer from IBM concerning how the emulated graphics card has been tested AND whether it's available on our LinuxONE instances (or only from within kvm or qemu, respectively).

=== Question: How can upstream open source projects integrate s390x tests into GitHub Actions as part of their CI/CD? ===

Answer: IBM has provided mainframes that back GitHub and GitHub Actions behind the scenes, so you can integrate tests for different architectures into your workflows. A tutorial can be found in a GitHub reference guide.

=== Question: Why is the s390-tools package also compiled for the x86_64 architecture when it's actually intended for/useful on only the s390x architecture? ===

Answer: (provided by Miroslav "Mirek" Franc on the ZSystems mailing list; thanks/shoutouts/kudos to him ;-) ):

The reason is the IBM Secure Execution feature set[1] and its related
utilities, such as managing ultravisor secrets (the pvsecret utility),
generating Secure Execution images (the genprotimg utility) and managing
attestations (the pvattest utility), all of which can be done from an
x86_64 workstation.[2][3] In theory from any other architecture as well,
but I assume x86_64 is the most important one.


[1] https://www.ibm.com/docs/en/linuxonibm/pdf/lx24se04.pdf
[2] https://www.ibm.com/docs/en/linux-on-systems?topic=execution-attesting#t_attest
[3] https://video.ibm.com/recorded/132127046
