[SOLVED] Linux/Windows not detecting Samsung EVO 970 with M.2 NVMe adapter

Apr 8, 2021
Hello everyone :)

Recently I bought a 1TB Samsung EVO 970 NVMe to go alongside my 500GB NVMe (and a 2TB HDD; I honestly don't know if it is relevant to the problem I am having).
The thing is, my mobo (MSI Z390-A PRO) only has one M.2 slot, so I had to buy an M.2 adapter (Orico PSM2 SSD adapter) to put it in an x16 PCIe slot.
The problem with this setup is that the BIOS sees the NVMe connected through the adapter, but neither Linux nor Windows sees it. I've tried everything I could and I have run out of ideas.
Any help would be appreciated!

Thanks everyone in advance :)
 
Apr 8, 2021
Can you show a screenshot from Device Manager (with the disk drives and storage controllers sections expanded),
a screenshot from Disk Management, and
a screenshot from Windows Storage Spaces?
(Upload to imgur.com and post the links.)

What version of Windows, btw?

I currently only have Manjaro Linux, but this was tested on Windows 10 Pro as well. Here are screenshots from GParted on Linux: https://i.imgur.com/jmRnwMI.png
and https://i.imgur.com/hOuD1uR.png

Here is the output of the "disk -l" command in bash: https://i.imgur.com/qiGEvFY.png
 
Here is a screenshot from the commands

According to "lsblk -f", both an old-style hard drive and a solid state drive are connected, visible, and working. The NVMe has three partitions in a fairly standard Linux install. The old-style hard drive is purely a Windows-type install, but I suspect it is not a "bootable" install (e.g., it might be ordinary data) because the first partition of the old-style drive is neither a VFAT nor a UEFI partition. Perhaps it could be a legacy BIOS system instead of UEFI; I don't know.

Unless there is a third hard drive involved, "lsblk -f" under Linux verifies that the drives are present and available for use. Is there a third drive?
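
If it helps, here is a quick generic check from the Linux side (just a sketch, not taken from your screenshots; the device names are whatever the kernel assigned on your machine):

lsblk -f          # every block device the kernel has created, with filesystem info
ls -l /dev/nvme*  # every NVMe node; a drive the BIOS sees but the OS does not will simply be missing here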
 
Apr 8, 2021

Yes, there is a third drive connected: a 500GB Samsung EVO 970 NVMe that is connected through an Orico PSM2 SSD adapter.

I would move the adapter to another machine to see if it works.

What is the make and model of the 500GB disk?

Just to be sure, post a link to the adapter.

Here is a link. The BIOS is detecting the disk, but neither Linux nor Windows does.
 
When you run the "lsblk -f" command you are looking at block devices, which is how the actual hard drives show up. Before that, PCI must also see the device, and this is likely what the BIOS is seeing. If PCI fails, then lsblk cannot see the drive. If PCI succeeds, then lsblk sees the drive if and only if the actual SATA wiring is working with sufficient quality (which is different from whether the PCIe wiring is working with sufficient quality... it is a chain of dependencies).

Since this is confusing, I'll restate it: PCI can succeed or fail (including from simple signal-quality issues rather than outright failure), and whether or not SATA can possibly succeed depends on this when going through PCIe slots. Once PCIe succeeds, there are similar signal-quality issues for the SATA wiring (and SATA signal quality can succeed or fail even if the device is technically functional... it is a matter of RF signal quality, and it changes depending on lane routing, lane shape, external noise sources, and so on).

So under Linux I suggest looking at the output of "lspci". Many devices will be shown, but if you can find lspci entries that you trust are for hard drives, then you've proven that the PCIe side is functioning for those specific devices which go through PCI (not all SATA must go through PCI, some might be wired directly to memory controllers for example, and a single SATA controller might deal with multiple hard drives and thus show only one entry despite two drives being attached). From there you can look at a more verbose version of lspci and see whether PCI reports any error. If there is no PCI error, then the issue is between SATA and PCI. Here is a command to somewhat shorten the list of what lspci will show, making it more likely your device will be somewhere in that list (since we know it is in a PCIe slot via an adapter):
lspci | egrep -v -i '(ethernet|vga|usb)' | egrep -i '(controller|sata|nvme|orico|psm2|ssd)'

Do you see your specific device in that lspci list? If so, then PCI sees it, and lsblk fails because the drive-to-adapter link has failed (possibly for no other reason than signal quality). In that case I would expect the BIOS to see the device, but the operating systems would not show any block device via lsblk.
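
As a rough follow-up sketch (the bus address 01:00.0 below is only a placeholder; substitute whatever address lspci prints for the adapter/NVMe entry on your system):

lspci -k                    # also shows which kernel driver, if any, is bound to each device; a working NVMe controller should list "nvme"
sudo lspci -vvv -s 01:00.0  # full details for that one device; check the LnkCap/LnkSta lines (negotiated PCIe speed/width) and any error status bits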
 
Solution
Just an added note on this: during boot on Linux you would normally see NVMe devices enumerated starting at device 0. Example:
/dev/nvme0
(this would have subdevices, e.g., "/dev/nvme0n1p1" is the first partition of the first namespace on "nvme0")

I see your visible device is "nvme1n1". This means something else enumerated as the first disk, "nvme0n1", and then that device disappeared. If there is a third NVMe, then it would be "nvme2" (and subdevices). Unless the udev mechanism is for some reason renaming devices (I've never heard of it doing this for an NVMe; it probably wouldn't interfere or interact with an NVMe), this implies you had another NVMe initialize, then disappear, and then the next device came up (working) as "nvme1n1". The third NVMe, like the first device at "nvme0", also does not show up.
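
For comparison, with three working NVMe drives you would normally expect naming along these lines (purely illustrative, not output from this machine):

/dev/nvme0, /dev/nvme0n1, /dev/nvme0n1p1   # first controller, its first namespace (the disk), and that namespace's first partition
/dev/nvme1, /dev/nvme1n1                   # second controller and its disk
/dev/nvme2, /dev/nvme2n1                   # third controller and its disk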

A boot log could add some details, but I'm not sure if there is much you would be able to do about it. Typically you'd create a log via something like:
dmesg 2>&1 | tee log.txt
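
If the full log is too long to post, a filtered view is usually enough (just a suggestion; the grep pattern is only a guess at which lines matter, and some distros require root to read dmesg):

sudo dmesg | grep -i -E 'nvme|pcie bus error|aer' | tee nvme-log.txt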