

Showing posts from 2024

Navigating the Data Lake: Deploying Hadoop/HDFS and Spark on Kubernetes

Introduction: In the modern era of data-driven decision-making, organizations are constantly seeking innovative solutions to harness the power of big data. One such solution is deploying Hadoop/HDFS and Spark on Kubernetes, a strategic endeavor that requires careful planning and execution. In this blog post, I will explore the process of deploying these technologies on Kubernetes, unlocking new possibilities for data management and analysis. This is the same solution we used when we started to build DataSetu, our no-code/low-code data pipeline and visualization platform. For DataSetu we needed to create a data lake capable of processing huge datasets with high performance. This is how we went about it.

Chapter 1: Setting the Foundation with Kubernetes

Before embarking on our journey to the data lake, it's essential to establish a solid foundation. Kubernetes serves as the cornerstone of our infrastructure, providing the orchestration and scalability needed to manage our distributed workloads.
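To make this concrete, here is a minimal sketch of a Spark job running against a Kubernetes cluster manager and reading from HDFS inside the same cluster. The master URL, container image, namespace, and the `hdfs-namenode` service name are all illustrative assumptions, not the exact values from our deployment.

```python
from pyspark.sql import SparkSession

# A minimal sketch, assuming an HDFS NameNode is reachable inside the
# cluster via a Service named "hdfs-namenode", and that a Spark container
# image has been pushed to a registry the cluster can pull from.
# All names below (master URL, image, namespace, path) are illustrative.
spark = (
    SparkSession.builder
    .appName("datalake-example")
    # Point Spark at the Kubernetes API server as its cluster manager.
    .master("k8s://https://kubernetes.default.svc:443")
    .config("spark.kubernetes.container.image", "myregistry/spark:3.5.0")
    .config("spark.kubernetes.namespace", "data-lake")
    .config("spark.executor.instances", "3")
    .getOrCreate()
)

# Read a dataset directly out of HDFS running in the same cluster,
# then run a simple aggregation across the executors.
df = spark.read.parquet("hdfs://hdfs-namenode:8020/data/events")
df.groupBy("event_type").count().show()

spark.stop()
```

The key design point is that Kubernetes replaces YARN as Spark's cluster manager: executors are launched as pods, so the same scheduler that runs the rest of the platform also scales the data-lake workloads.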

How to use Generative AI on custom data sets across industry to build domain specific use cases

Generative AI is transforming the landscape of machine learning by enabling the creation of new content that can mimic human-like capabilities. One of the most powerful forms of generative AI is the Large Language Model (LLM), which can understand and generate human language with remarkable proficiency.

A key feature of LLMs is that they can be re-trained or fine-tuned on custom data sets, which allows organizations to tailor the model's responses to specific domains or use cases. For instance, a legal firm could fine-tune an LLM on legal documents to assist in drafting contracts, while a medical research company could train it on scientific papers to generate new research hypotheses.

The process of re-training an LLM involves several steps. First, a suitable data set must be curated. This data set should be large enough to cover the desired scope and diverse enough to enable the model to learn the relevant patterns and nuances. Next, the model must be fine-tuned, which involves adjusting its weights on the new data.
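The sketch below illustrates these two steps with the Hugging Face libraries: loading a curated corpus and fine-tuning a small open model on it. The model choice (`distilgpt2`), the `train.jsonl` file, and the hyperparameters are assumptions for illustration; a real domain-specific use case would substitute its own base model and corpus.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Illustrative choices: a small open model and a local JSONL file of
# domain text (e.g. legal clauses). Swap in your own model and corpus.
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Step 1: curate the data set. Here each line of train.jsonl is
# assumed to hold a {"text": ...} record.
dataset = load_dataset("json", data_files={"train": "train.jsonl"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Step 2: fine-tune, i.e. adjust the model's weights on the new corpus.
args = TrainingArguments(
    output_dir="domain-llm",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, the checkpoint in `domain-llm` can be loaded like any other model for domain-specific generation; techniques such as LoRA or prompt-based adaptation are lighter-weight alternatives when full fine-tuning is too costly.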