Testing Code of Ethics

Artificial Intelligence looks like the solution to many problems. Chatbots, for example, act like real people with the help of Artificial Intelligence. Chatbots are already so advanced that you can no longer tell whether you are chatting with a bot or a real person. That can raise problems, though. Some dating sites use chatbots that try to convince their mostly male visitors to take out a paid subscription. Many of these people believe they are talking to a real person, when in reality it is nothing but software. That is a little scary and not very ethical.

Facebook has done experiments with its users in the past. The researchers wanted to see whether users are happier when more positive news passes by in their newsfeed, and whether seeing more negative news makes people unhappy. Facebook manipulated the newsfeed by filtering news on positive sentiment. The result was as expected: people posted more negative messages when they saw more negative news. None of the users knew about the experiment. Is that ethical?

There was another experiment. Researchers wanted to know whether they could get more people to vote in the next election, so they created an ad simply telling people to go and vote. It seems a noble thing, because higher turnout is good for democracy. But it can also be seen as manipulation: the people who would normally stay home might all favor a certain candidate, and mobilizing them can decide which candidate wins. Is that experiment ethical?

Facebook has registered a patent for a new kind of credit rating. Banks and insurance companies use credit ratings to determine whether a customer is able to pay a loan back. If you score low on that scale, it is unlikely that you get a loan, and if you do get one, you have to pay more interest. So it is better to score high on such a scale.

Facebook has also developed such a rating. Your social network determines the rating. That sounds interesting, but is it? Suppose you move to Africa to help the local people there. After a few years you return to your home town, decide to buy a house and go to the local bank. Because you do not have any income yet, you score low on their credit rating scale and get no loan. With Facebook's rating, the principle is different: the bank can now see that your friends have nice jobs, so you get a higher rating. Is this positive news? Or is it not?

Now suppose you are a hard-working person who does not earn that much money, and you have many unemployed friends on social media. That same algorithm then calculates that you are no good and gives you a low credit rating. In this system you are a risk.
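To make the problem concrete, here is a minimal sketch of what a friend-based score could look like. Everything in it is invented for illustration; Facebook's patented algorithm is not public, and the idea that the score is simply the share of employed friends is my own simplification.

```python
# Hypothetical friend-based credit score: the rating is simply the
# fraction of a person's friends who are employed. Names, data and
# weighting are invented for illustration only.

def social_credit_score(friends):
    """Return a score between 0 and 1 based on friends' employment."""
    if not friends:
        return 0.0
    employed = sum(1 for friend in friends if friend["employed"])
    return employed / len(friends)

# The returning aid worker whose friends have nice jobs scores high...
aid_worker_friends = [{"employed": True}] * 8 + [{"employed": False}] * 2
print(social_credit_score(aid_worker_friends))   # 0.8

# ...while the hard worker with many unemployed friends scores low,
# regardless of how reliably either of them would repay a loan.
hard_worker_friends = [{"employed": True}] * 2 + [{"employed": False}] * 8
print(social_credit_score(hard_worker_friends))  # 0.2
```

Even this toy version shows the ethical issue: the score says nothing about the person being rated, only about the people around them.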

What would you do if you had to test such a system? Do you report a bug if you find a problem that is not a functional problem but an ethical one? Should ethical requirements be part of the non-functional requirements? We should test those requirements too. Systems that use Artificial Intelligence and big data are still very young, yet there are already many examples that are not ethical at all.

We should create a Testing Code of Ethics. The following rules are in my Code of Ethics:

  • The system should not harm the users or society. Neither physical nor mental harm is allowed.
  • The public good is a central concern.
  • The system is not allowed to discriminate in any way.

Are there other ethical rules that you would want to test for? I’d love to hear from you.