With computers and machinery becoming ever more complex and powerful, it is not far-fetched to assume that they may one day become self-aware and perhaps even sentient. This would ultimately require granting them the status of lifeforms. But where do we have to draw the line? Where does Star Trek draw the line?
While the crew more than once relentlessly hunted down strange but supposedly "natural" lifeforms, such as in "Operation - Annihilate!", "Obsession" or "The Immunity Syndrome", there are other occasions on which sentient beings were saved in the spirit of "exploring strange new life", such as in "The Devil in the Dark" or "Metamorphosis". A machine, on the other hand, was at most conceded the attribute of intelligence at the time of TOS. Although there was never a hint that androids existed in the Federation or in Starfleet, the impression is that anyone like Data would have been denied their rights of individuality in the 23rd century. Even more drastically, androids and machines routinely had to be destroyed or "talked to death", as happened with Landru in "Return of the Archons", the war simulation computer in "A Taste of Armageddon", the planet killer in "The Doomsday Machine", Nomad in "The Changeling", Vaal in "The Apple", the androids in "I, Mudd" or M-5 in "The Ultimate Computer". No one really felt sorry about the loss. On the contrary, their obvious intelligence was rated as dangerous to humanity. Much like with genetically enhanced humans, there was something intrinsically villainous about autonomous computers and androids. They were either programmed to be "evil", or they ran out of control because of accidents, giving themselves the goal of ruling over biological lifeforms or exterminating them, as in TOS: "What Are Little Girls Made Of?" or "I, Mudd". And even the few advanced machines that functioned as intended had something eerie about them, such as M-4 from TOS: "Requiem for Methuselah" (which may have to do with the fact that the prop was a re-use of Nomad from "The Changeling").
Another example of an artificial lifeform that is shown in a positive light is V'ger from "Star Trek: The Motion Picture", even though V'ger destroys anything in its flight path (apparently not knowing what killing a lifeform actually means in an ethical sense). In any case, V'ger's quest for its creator is one more characteristic that is typical of lifeforms, not of machines.
TNG and beyond
In light of Data's career it is odd that in TNG: "The Measure of a Man" Commander Maddox could simply demand that Data be disassembled because, as he claimed, Data was Starfleet's property. Essentially the same question came up when Admiral Haftel demanded Lal's extradition with much the same justification in TNG: "The Offspring". On a related note, even if Data was regarded not as a person but as a thing, how could Starfleet claim ownership of him when he legally belonged to Dr. Noonien Soong? It is only possible that some later legal ruling was used to override this upon Data's entry into Starfleet. In any case, whether he was alive or not should have been decided much earlier, not only after years in Starfleet. And Data should have made sure that no one could claim ownership of Lal before building the android.
Furthermore, in some cases it may have severe consequences for human beings if they have to respect the rights of evolving artificial life, possibly at the expense of their security or even their lives. The nanite incident in TNG: "Evolution" almost turned into a disaster, and we may only speculate what would have happened if the Enterprise-D had chosen a less moderate way to procreate in TNG: "Emergence". Like V'ger before them, the evolving artificial intelligences of TNG seem to have forgotten or to ignore respect for biological lifeforms, although something like "ethical subroutines" must have been part of their original programming.
Even though shown in an overall positive light this time (as "evolution" is unequivocally deemed a worthwhile process in Trek), there is still something incalculable about artificial life, like a remnant of the villainous machines of TOS. Even Data himself occasionally runs amok, notably in "Brothers" and in "Insurrection". While we should not forget that human beings are overall much more "fault-prone", the severity of machine faults is much higher, especially when fail-safe mechanisms fail or are overridden. But seeing that no one generally mistrusts holodecks either, although these fail quite often (actually much too often to still be considered safe), the possible fear of androids and other artificial lifeforms should not be a reason for the Federation to impede their development.
Androids, computers and other intelligent machines were predominantly seen as pieces of technology in the 1960s and not so much as possibly sentient beings. If they exhibited characteristics of human beings in the time of TOS, it almost invariably endangered the human crew, which became a cliché of the series. But their possibly villainous behavior was not condemned, because the androids or other machines acted strictly within the boundaries of their programming. This changed with Data. Since the remarkable episode "The Measure of a Man" there has been an ongoing trend to recognize the rights not only of androids but also of other artificial lifeforms such as holograms. While technology runs out of control just as frequently as in TOS, in the 24th century the question of whether androids and machines can surpass their original programming has moved into the focus of interest.
Overall, the 24th century seems to be somewhat more open-minded about artificial lifeforms. However, this may be due to real-world developments, as today's computers are far more advanced than anyone could have imagined in the 1960s. The question of whether computers may become sentient has become a more interesting science fiction issue than it was at the time of TOS.