I added a couple of SSDs to my server today, connected via a PCIe SATA card, and it has slowed the PC to a crawl - so slow that it times out when I try to connect by RDP!
Sitting at the PC physically, I don't see anything alarming in Task Manager, but the PC is unusable.
All these cards seem to fit in the smallest of the PCIe slots, which is described as PCIe 3.0 x1. I also have two PCIe 3.0 x16 slots (running in x16 and x4 modes).
https://www.amazon.co.uk/gp/product/B09FY9FBTN
which has 4 host controllers, still only needs 1 lane.
What do I need to look for to get a reasonable speed out of my SSDs - is it lanes, host controllers, chipset, or something else?
Advice appreciated; I can't remember when I last had a PC running this slowly. It's in the middle of a job at the moment - I'll check what make the card is if it ever finishes...
These are RAID cards; you might run them in JBOD mode to use
one as a conventional SATA PHY.
Adapters of this kind are normally 300 to 500. A price this low
is suspicious, unless they are going out of production.
There is no question that SATA is dying, even though hard drives
are not going away (HAMR 32TB, PMR 22TB).
https://www.amazon.co.uk/Highpoint-RocketRAID-840C-16-Port-Controller-Black/dp/B09MPTSGM7
PCIe x8 Rev3 (~1 GB/sec per lane)
16 SATA ports
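
For scale, here is the lane arithmetic. These are ballpark figures
of mine, not from the product page: a PCIe 3.0 lane carries roughly
985 MB/sec of usable payload, and a SATA III SSD tops out around
550 MB/sec in practice.

# Back-of-envelope check: PCIe link bandwidth vs. combined SATA demand.
# Assumed figures: ~985 MB/s usable per PCIe 3.0 lane (8 GT/s with
# 128b/130b encoding), ~550 MB/s practical ceiling per SATA III SSD.

PCIE3_LANE_MBS = 985    # usable MB/s per PCIe 3.0 lane
SATA3_SSD_MBS = 550     # practical MB/s per SATA III SSD

def headroom(lanes: int, drives: int) -> float:
    """Ratio of link bandwidth to what the drives can demand at once."""
    return (lanes * PCIE3_LANE_MBS) / (drives * SATA3_SSD_MBS)

# Your cheap x1 card with two SSDs (assuming it negotiates PCIe 3.0):
print(f"x1 link, 2 SSDs:  {headroom(1, 2):.2f}x")     # ~0.90x

# The RocketRAID 840C with all 16 ports busy:
print(f"x8 link, 16 SSDs: {headroom(8, 16):.2f}x")    # ~0.90x

Even the x1 link is only about 10% short of feeding two SSDs flat
out, so lane count alone cannot explain a machine slowing to a
crawl. That points at the driver or interrupt angle below.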
Customer comments mention a driver signing issue. 64-bit drivers
must be signed; 32-bit Windows does not enforce that, so Win10
32-bit will be the last 32-bit way of getting the job done.
RAID cards with hardware support have an XOR block as part of
the RAID engine. This normally results in a heatsink on top.
A "soft-RAID" card relies on a piece of OS code (the driver) to
do the XOR, the sparing, the rebuild, and so on. Some OS RAM may
be used as a buffer for RAID file fetches (real RAID cards have
a DRAM DIMM as a cache module).
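
To make the XOR part concrete, here is a minimal sketch of
RAID-5-style parity (mine, for illustration; not anything a card
actually runs): the parity block is the XOR of the data blocks,
and a lost block is rebuilt by XORing the survivors.

# Minimal RAID-5-style parity sketch (illustrative only). A hardware
# RAID card does this in its XOR block; soft-RAID does it on the CPU.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data1 = b"AAAA"
data2 = b"BBBB"
parity = xor_blocks(data1, data2)    # written to the parity drive

# If the drive holding data2 dies, rebuild it from the survivors:
rebuilt = xor_blocks(data1, parity)
assert rebuilt == data2

The XOR itself is cheap; what the hardware offload buys you is
doing it for every stripe during a rebuild while still serving
normal I/O.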
If the card has hardware support for RAID, this reduces CPU
overhead and lets one machine do computing and disk I/O at the
same time. Adapters, even lowly ones, use DMA to transfer file
reads into OS RAM buffers, and this is what normally keeps the
overhead low. The behaviour you are seeing indicates something is
not right: either an interrupt storm, or a driver-level issue.
For example, maybe somebody has been shipping the same WinXP-era
driver with their hardware for yonks, and now it decides to act up
on Windows 11 :-)
*******
You can measure interrupts in Windows easily, but the graph
scale calibration is mystery meat. A graphics card under test
used to interrupt at about 2000 interrupts/sec.
There is a difference between DX11 and DX12 in the interrupt
rate you expect; DX12 would be lower than DX11. I have not
tested or calibrated graphics across the ages. I could have
done that using my WinXP...Win10 machine, but that machine
is dead now and my analysis range is "crimped".
I'll just toss this out and you can play with it. A graphics
card used to run about 2000 interrupts per second. The interrupt
limiter (per card) was around 10000 to 15000 per second or so.
In this demo, the NVidia SmokeParticle.exe from the SDK they
provide for Visual Studio was used to generate a load, so I
could "step" the interrupt rate on demand. Yes, you have to
compile it from source to get the EXE.
There are other ways to generate load (Furmark or a 3DMark run),
but their interrupt characteristics are unknown. In any case, when
the system is idle, there might be just a few USB interrupts when
your mouse whizzes around.
[Picture] Using perfmon.msc to measure Interrupt rate
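
If you would rather log numbers than eyeball the perfmon graph,
the same counter can be read from a script. A minimal sketch,
assuming Python plus the stock Windows typeperf tool (the counter
name below is for an English-language install):

# Sample the system-wide interrupt rate once a second for 30 seconds,
# using the built-in Windows typeperf utility.
import subprocess

COUNTER = r"\Processor(_Total)\Interrupts/sec"

proc = subprocess.run(
    ["typeperf", COUNTER, "-si", "1", "-sc", "30"],  # interval, sample count
    capture_output=True, text=True,
)
print(proc.stdout)    # CSV lines: "timestamp","interrupts/sec"

An idle system should show modest numbers; a rate pinned up in the
10000-15000 limiter territory while the SATA card is busy would
support the interrupt-storm theory.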
Paul