I think that those who say this (decisions by machines) are referring to a different kind of decision than the one you describe. I don’t think anyone (TVP, TZM, or others) thinks that an algorithm should decide what the educational system should look like, how to treat other people or animals, and so forth. You can build algorithms to suggest answers to these questions, but it is up to people to decide upon such things.
Think about cancer treatment: even today, AIs are used to help arrive at treatment decisions, and these are life-and-death decisions. But these AIs do not decide or force any treatment on anyone. They are also programmed by doctors to, perhaps, increase the patient’s healthy lifespan, not only survival. So the AI is human flavored (the more scientifically flavored the humans, the smarter the AI). Then the patient can be shown the alternatives: treatment X offers longer survival but has these side effects, while treatment Y offers shorter survival but milder side effects. Then the patient decides what treatment to undergo. So the relationship between computers and people is complex and depends on the domain of study/implementation.
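To make the idea concrete, here is a minimal sketch of such a decision-support program. All names and numbers are hypothetical, invented purely for illustration; the point is that the software ranks and presents options, and a human makes the final choice.

```python
# Sketch of an "AI suggests, patient decides" workflow.
# Treatment names and figures below are made up for illustration,
# not real medical data.
from dataclasses import dataclass


@dataclass
class Treatment:
    name: str
    expected_survival_years: float  # illustrative estimate
    side_effect_severity: int       # 0 (none) to 10 (severe), illustrative


def suggest(treatments):
    """Sort options by expected survival, longest first.

    The function only orders and returns the options; nothing here
    selects or forces a treatment on anyone.
    """
    return sorted(treatments, key=lambda t: -t.expected_survival_years)


options = [
    Treatment("X", expected_survival_years=6.0, side_effect_severity=8),
    Treatment("Y", expected_survival_years=4.5, side_effect_severity=3),
]

for t in suggest(options):
    print(f"Treatment {t.name}: ~{t.expected_survival_years} years, "
          f"side effects {t.side_effect_severity}/10")
# A person reviews this output and decides; the program decides nothing.
```

The design choice matters: `suggest` returns the full ranked list with its trade-offs rather than a single "best" answer, which is exactly the difference between a machine that advises and a machine that decides.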
For instance, software can be tested to see whether it drives a car more safely than a human does, and if so, the software (a self-driving car) is chosen to drive instead of humans.
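The test described above could, in its simplest form, be a comparison of accident rates over comparable mileage. A rough sketch, using made-up figures purely for illustration:

```python
# Compare accident rates per million miles for human drivers vs. the
# driving software. All figures are hypothetical, for illustration only.

def accidents_per_million_miles(accidents, miles):
    """Normalize an accident count by distance driven."""
    return accidents / (miles / 1_000_000)


# Hypothetical data over the same total mileage for both groups.
human_rate = accidents_per_million_miles(accidents=4_000, miles=2_000_000_000)
software_rate = accidents_per_million_miles(accidents=1_200, miles=2_000_000_000)

if software_rate < human_rate:
    print("On this data, the software drives more safely than humans.")
else:
    print("On this data, humans remain the safer drivers.")
```

A real evaluation would of course need far more than a single rate (comparable road conditions, statistical significance, severity of accidents), but even this toy version shows that the criterion itself is something people define and check.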
So computers can be programmed to find better educational systems, or even the best ways to treat animals so that they won’t suffer, but these are all suggestions coming from software designed by human minds, and only a scientifically minded society can make the best of these suggestions. But it varies from situation to situation.