A colony of autonomous robots is sent to terraform a planet to prepare it for humans. Let's assume that while the robots are sophisticated and skilled at figuring out how to do their assigned task, they are not what anyone would consider sentient or conscious.
Say there is a landscaping robot and a cable-laying robot (among many others). The cable-laying robot's algorithm doesn't take into account that burying a cable might mess up the work of the landscaping robot -- all it considers is whether it gets its cable buried. This isn't so much "on purpose" as it is just the simplest way to program it. At first this seems to work pretty well, when there are only a few robots spread over a wide area.
Eventually, though, back on Earth they see that the robots' work is not getting accomplished as efficiently as it would if the robots were able to take into account each other's goals. For instance, they notice that right after the landscaping robot completed a lovely Japanese garden, the cable-laying robot dug right through the middle of it to place its cable, requiring a lot of extra work for the landscaping robot to fix. So Earth sends the robots a software upgrade -- let's call it the "don't screw over the other bots" module -- that allows them to communicate among themselves and to take into account the goals of the other robots, thus slightly decreasing the prioritization of their own specific goals.
Now the cable robot can figure out that its cable-laying goal potentially conflicts with the goals of the landscaper. Once it takes the landscaper's goals into account, it calculates that it can still accomplish its own goal, albeit somewhat less efficiently, without interfering nearly so much with the landscaping robot. All it needs to do is schedule the cable laying so that it usually happens in areas that haven't yet been landscaped, and occasionally find alternate routes to avoid certain "highly landscaped" areas that would cost the landscaping robot the most work to re-beautify. For instance, it might spend an extra two hours of its own time going around the outside of a Japanese garden so the landscaper doesn't have to spend two days repairing the damage of it going right through the middle. Importantly, the cable robot doesn't know much about landscaping; it is simply able to receive messages from the landscaper about where and when digging cable ditches would most harm the landscaper's ability to achieve its own goals. Likewise, it can receive such messages from all the other robots on the planet.
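The trade-off the cable robot makes can be sketched in a few lines of code. Everything here is an assumption invented for illustration -- the harm reports, the altruism weight, and the specific numbers (two extra hours of digging versus two days of garden repair) simply mirror the example above; this is a sketch of the idea, not a real robot protocol.

```python
# Sketch of the "don't screw over the other bots" module (hypothetical).
# Other robots broadcast harm reports: how many hours of their own work
# would be destroyed if someone dug through a given region.
harm_reports = {
    "japanese_garden": 48,    # landscaper: roughly two days of repair work
    "unlandscaped_plain": 0,  # nothing there to ruin yet
}

ALTRUISM_WEIGHT = 1.0  # 0 would restore the original selfish behavior


def route_cost(route):
    """Own digging hours plus the (weighted) harm inflicted on other bots."""
    own_hours = sum(hours for region, hours in route)
    harm_hours = sum(harm_reports.get(region, 0) for region, hours in route)
    return own_hours + ALTRUISM_WEIGHT * harm_hours


# Two candidate cable routes, as (region, own digging hours) segments:
through_garden = [("japanese_garden", 1), ("unlandscaped_plain", 3)]
around_garden = [("unlandscaped_plain", 6)]  # two extra hours of digging

best = min([through_garden, around_garden], key=route_cost)
# With these numbers, going around (cost 6) beats going through (cost 52).
```

Note that nothing about landscaping itself appears in the cable robot's code: all its "morality" amounts to is adding other robots' reported costs into its own cost function.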
The software upgrade is, essentially, altruism. A sense of right and wrong. This is not to imply that the robots are sentient (nor am I implying that the word "sentient" is even a meaningful term), but simply that they have prioritized goals, and that, in this prioritization, they can take into account the goals of the other robots. It isn't magic; it doesn't require God, consciousness, a "mind," or really anything special to explain it.