You are confusing MTTF with lifetime. SSDs have a write-cycle lifetime as well as random failure modes. MTTF and MTBF are only valid mid-life, when failures tend to be random. They are not useful in the wearout phase (or the infant-mortality phase) of the bathtub curve, so called because the failure-rate curve looks like the cross-section of a bathtub.
It is possible that designers are getting good at wear levelling, which tries to ensure that repeated writes to the same logical block don't actually land on the same physical one, but there is still a write-related wearout mechanism.
SSDs also have a data-retention lifetime. That can be as low as 5 years if powered down, though it is possible that the latest ones actively refresh their cells when powered up.
I'm retired, and when employed I was a software developer, so I haven't had to make that decision. Broadly, though, SSDs are best suited to read-mostly workloads, such as executable files and fixed announcements, or to cases where speed or mechanical resilience is critical. The key point I was making is that a 171-year MTTF doesn't mean a drive will last 171 years.
Its real use is when you have enough units in service that several will fail in a year: it lets you estimate how many spares you need and cost the impact of downtime.
MTBF and MTTF are confusing terms, and advertisers often rely on that confusion to give a false impression of true lifetime.
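To make the point concrete, here is a back-of-envelope sketch of what a headline MTTF actually tells you mid-life. The 1.5-million-hour figure and the fleet size are illustrative assumptions, not taken from any datasheet:

```python
# Sketch: why a ~171-year MTTF doesn't mean a 171-year lifetime.
# All numbers are illustrative assumptions, not datasheet values.

HOURS_PER_YEAR = 8766  # average, including leap years

mttf_hours = 1_500_000                    # an assumed advertised MTTF
mttf_years = mttf_hours / HOURS_PER_YEAR  # ~171 years

# Mid-life (the flat part of the bathtub curve, random failures only),
# the annualized failure rate is roughly the reciprocal of the MTTF.
# It says nothing about wearout or infant mortality.
afr = 1 / mttf_years

# Where MTTF is genuinely useful: spares planning for a fleet.
fleet_size = 1000
expected_failures_per_year = fleet_size * afr

print(f"MTTF ≈ {mttf_years:.0f} years")
print(f"Annualized failure rate ≈ {afr:.2%}")
print(f"Expected failures in a {fleet_size}-drive fleet ≈ "
      f"{expected_failures_per_year:.1f}/year")
```

So a 171-year MTTF really just says "expect roughly 0.6% of your fleet to fail per year while drives are mid-life", which is exactly the spares-and-downtime calculation, not a promise about any one drive's lifespan.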
Backblaze, which publishes drive performance reports, found that HDDs start showing a 3%+ annual failure rate at year 5, rising to 5% in year 6 and continuing to climb each year after. They've been tracking HDDs longer, so they only have about 5 years' worth of SSD data, but over that same span SSD failures stayed at 1% or less (and they're now counting year 6). Even in year 1, HDDs had a 0.66% failure rate while SSDs had 0%.
The benefits of SSDs outweigh those of HDDs, and since 2018 SSD prices have dropped sharply, making them far more affordable. Cost was probably the biggest factor keeping them from gaining traction in the consumer market for so long.
We’ve been using SSDs exclusively in our installs for years now. Not going to say that none have ever died but it was maybe one or two?
To hedge against failures we have regular backups of our systems and we are able to typically stand up a new SSD and restore from backup within two to three hours.
Again, that's happened to maybe two drives in the last 6 to 7 years, and one of those I think was an mSATA drive, which we no longer use.
For me it's SSDs, for reasons of power consumption, speed, and noise, but RAID 1 two of them to make reliability an order of magnitude or more better.
Hell… if you've got the space and budget: RAID 10 with four enterprise WD/Crucial/Samsung SSDs. Size it at 4x what you think you'll ever need and you'll probably never touch the box again, barring a PSU or motherboard failure.
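The "order of magnitude or more" claim above can be sketched with a rough calculation. The 1% annual failure probability and the one-day rebuild window are assumptions for illustration; real array reliability also depends on correlated failures and unrecoverable read errors, which this ignores:

```python
# Back-of-envelope sketch of why mirroring (RAID 1) helps.
# Assumed inputs, not measured values; ignores correlated failures
# and unrecoverable read errors, so it's optimistic.

p_single = 0.01    # assumed 1% chance a given SSD fails in a year
rebuild_days = 1   # assumed time to replace and resilver a failed drive

# A mirror loses data only if the surviving drive also fails
# during the rebuild window after the first failure.
p_second_during_rebuild = p_single * (rebuild_days / 365)

# Either drive can be the one that fails first.
p_mirror_loss = 2 * p_single * p_second_during_rebuild

print(f"Single drive: {p_single:.2%} chance of data loss per year")
print(f"RAID 1 pair:  {p_mirror_loss:.6%} chance of data loss per year")
```

Even with these crude assumptions the mirror comes out several orders of magnitude less likely to lose data than a single drive, which is why "order of magnitude or more" is, if anything, an understatement for independent failures.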
To be clear: I'm saying the answer to the original question, "Is SSD recommended?", is a no-brainer these days, considering backup and recovery are faster as well. Just don't buy no-name SSDs priced 50% lower than the rest . . . and be sure the rest of your build warrants the cost.