The General Data Protection Regulation (GDPR), which came into effect today (May 25, 2018), potentially impacts all Machine Learning (an advanced area of AI and Cognitive Computing).

Among its provisions, the GDPR includes a "right to explanation" clause.

However, in a recent article in the MIT Technology Review, top AI scientists lament that they don't really know why these algorithms work.

So when Google, Facebook, or Bing use your personal data to improve the search results and recommendations they provide, they cannot necessarily comply if they do not know how their Machine Learning algorithms have weighted their probabilistic models.
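To see what "explaining a weighted model" means in the simplest case, consider a linear scoring model, where each feature's contribution to a prediction is just its weight times its value. The feature names and weights below are purely hypothetical, and this is only a minimal sketch of the idea, not how any of these companies' systems actually work:

```python
# Hypothetical weights a recommendation model might learn for a user.
weights = {"pages_viewed": 0.8, "days_since_visit": -0.3, "purchases": 1.5}

def explain(features):
    """Return the linear score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, parts = explain({"pages_viewed": 4, "days_since_visit": 10, "purchases": 1})

# Sorting by absolute contribution shows which features drove the prediction,
# which is the kind of per-decision account a "right to explanation" asks for.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.1f}")
```

With a deep neural network, by contrast, the learned weights interact nonlinearly across many layers, so no such direct per-feature reading exists, and that gap is precisely the transparency problem the article describes.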

There has recently been an initiative from DARPA to try to address this, called "Explainable AI (XAI)."

It is perhaps even more critical that the Defense Advanced Research Projects Agency understand its algorithms when they are weaponized, since a failure in outlier situations may mean not just a loss of data but a loss of life.

If they can crack this transparency problem, it will play an important role in FAT/ML (fairness, accountability, and transparency in machine learning).

Resources