Data lakes tend to be populated by dumping data indiscriminately from various sources. Although in some contexts a kitchen-sink approach may be what you want, in most cases you will eventually go back and mine the lake for particular patterns. If your data lake is ingesting structured and unstructured data or content from multiple sources, you will want to curate that data, enriching it with metadata and tags, so that when you later return to the lake to process it you can apply machine learning algorithms, using supervised learning to learn from the data you have accumulated.
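The enrichment step above can be sketched as a small function that wraps each raw record with metadata and tags at ingestion time; the schema here (an `enrich_record` helper producing `payload`/`metadata` fields) is a hypothetical illustration, not a prescribed format.

```python
from datetime import datetime, timezone

def enrich_record(raw_record, source, tags):
    """Wrap a raw record with curation metadata (hypothetical schema).

    The metadata lets later consumers filter the lake and use the
    tags as labels for supervised learning.
    """
    return {
        "payload": raw_record,
        "metadata": {
            "source": source,
            # UTC timestamp recorded at ingestion time
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            # de-duplicated, sorted tags for consistent querying
            "tags": sorted(set(tags)),
        },
    }

# Example: tagging an unstructured document as it enters the lake
record = enrich_record(
    {"text": "Invoice #1234 for consulting services"},
    source="email-gateway",
    tags=["invoice", "finance", "unstructured"],
)
```

A consumer can then select only records whose tags match a pattern of interest, instead of scanning the entire lake.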
A Filtered Data Lake