

Navigating the Data Lake: Deploying Hadoop/HDFS and Spark on Kubernetes

Introduction: In the modern era of data-driven decision-making, organizations are constantly seeking innovative solutions to harness the power of big data. One such solution is the deployment of Hadoop/HDFS and Spark on Kubernetes, a strategic endeavor that requires careful planning and execution. In this blog post, I will explore the process of deploying these technologies on Kubernetes, unlocking new possibilities for data management and analysis. This is the same solution we used when we started to build DataSetu, our no-code/low-code data pipeline and visualization platform. For DataSetu we needed to create a data lake that could process huge datasets with high performance. Here is how we went about it.

Chapter 1: Setting the Foundation with Kubernetes

Before embarking on our journey to the data lake, it's essential to establish a solid foundation. Kubernetes serves as the cornerstone of our infrastructure, providing the orchestration and scalability needed to manage our distributed…
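To give a flavour of the end state, here is a minimal Java sketch of the kind of Spark job that can be submitted to a Kubernetes-hosted cluster and read data from HDFS running inside it. The HDFS service name, file path, API server address and container image below are placeholder assumptions for illustration, not DataSetu's actual configuration.

    // A minimal sketch. Assumed placeholders: the HDFS service name
    // "hdfs-namenode", the path under /datalake, the Kubernetes API server
    // address and the Spark container image in the submit command below.
    //
    // Submitted to the Kubernetes-hosted cluster roughly like this:
    //   spark-submit \
    //     --master k8s://https://<api-server>:6443 \
    //     --deploy-mode cluster \
    //     --class DataLakeSmokeTest \
    //     --conf spark.kubernetes.container.image=<your-spark-image> \
    //     local:///opt/spark/jars/app.jar

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class DataLakeSmokeTest {
        public static void main(String[] args) {
            // spark.master and the Kubernetes settings come from spark-submit,
            // so the job code itself stays free of cluster details.
            SparkSession spark = SparkSession.builder()
                    .appName("datalake-smoke-test")
                    .getOrCreate();

            // Read a sample file from the HDFS NameNode service in the cluster.
            Dataset<Row> rows = spark.read()
                    .option("header", "true")
                    .csv("hdfs://hdfs-namenode:8020/datalake/raw/sample.csv");

            System.out.println("Rows in sample file: " + rows.count());
            spark.stop();
        }
    }

Keeping the Kubernetes-specific settings on the spark-submit side means the job code stays the same whether it runs on YARN, standalone or Kubernetes, which makes this kind of migration easier.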
Recent posts

How to use Generative AI on custom data sets across industry to build domain specific use cases

Generative AI is transforming the landscape of machine learning by enabling the creation of new content that can mimic human-like capabilities. One of the most powerful forms of generative AI is the Large Language Model (LLM), which can understand and generate human language with remarkable proficiency. A key feature of LLMs is that they can be re-trained, or fine-tuned, on custom data sets. This allows organizations to tailor the model's responses to specific domains or use cases. For instance, a legal firm could train an LLM on legal documents to assist in drafting contracts, while a medical research company could train it on scientific papers to generate new research hypotheses. The process of re-training an LLM involves several steps. First, a suitable data set must be curated. This data set should be large enough to cover the desired scope and diverse enough to enable the model to learn various patterns and nuances. Next, the model must be fine-tuned, which involves adj…
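To illustrate just the data-curation step in code, here is a small, self-contained Java sketch that writes prompt/completion pairs as line-delimited JSON (JSONL), a layout many fine-tuning pipelines accept. The class name, example pairs and output file name are hypothetical.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FineTuneDatasetBuilder {

        // Minimal JSON escaping for backslashes, quotes and newlines.
        private static String escape(String s) {
            return s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical prompt/completion pairs; in practice these would be
            // extracted from the organization's own documents.
            String[][] pairs = {
                {"Summarise clause 4.2 of the NDA.", "Clause 4.2 limits disclosure to ..."},
                {"List the side effects reported in trial X.", "The reported side effects were ..."}
            };

            // One JSON object per line (JSONL), one supervised example per line.
            StringBuilder jsonl = new StringBuilder();
            for (String[] pair : pairs) {
                jsonl.append("{\"prompt\": \"").append(escape(pair[0]))
                     .append("\", \"completion\": \"").append(escape(pair[1]))
                     .append("\"}\n");
            }
            Files.write(Paths.get("fine_tune_dataset.jsonl"),
                    jsonl.toString().getBytes(StandardCharsets.UTF_8));
            System.out.println("Wrote " + pairs.length + " training examples.");
        }
    }

The fine-tuning run itself then consumes a file like this, with the framework and hyperparameters depending on the model being adapted.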

What is big data? And what role does Hadoop have to play?

In recent times we have been hearing and reading (in many blogs) quite often about cloud computing, big data and HPC, i.e. high-performance computing. Undoubtedly, these are the buzzwords of this decade. If you have been in the industry for some time, you can recall that the last decade was the Web 2.0 decade. So what is big data? Is it a new framework, a new data modelling technique, or part of the NoSQL movement? I have often seen people relate big data to one of these, or simply be confused about it. The same goes for Hadoop, which is often thought of as a NoSQL database framework. Big data is nothing but a huge amount of data that requires mining. Per Wikipedia, big data is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process within a tolerable elapsed time. Big data sizes are a constantly moving target, currently ranging from a few dozen terabytes to many petabytes in a single data set. So big data is a hovering p…
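To make Hadoop's role a little more concrete, here is the classic MapReduce WordCount job in Java, written against the standard Hadoop MapReduce API; the input and output HDFS paths are supplied as arguments. It counts how often each word appears across files that may be far too large for a single machine.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: emit (word, 1) for every token in the input split.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum the counts emitted for each word.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Hadoop splits the input across the cluster, runs the mapper near the data on each node, and the reducer aggregates the results, which is the processing model that handles data sets ordinary tools cannot.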

Class.forName(String className) used in JDBC unfolded

During many interviews I have taken, I have observed that even seasoned programmers find it difficult to explain the use of the Class.forName() method when creating a JDBC connection object. In this post, I will try to explain it. Here is a simple piece of JDBC code:

    import java.sql.*;

    public class JdbcConCode {
        public static void main(String[] args) {
            Connection con = null;
            String url = "jdbc:mysql://localhost:3306/nilesh";
            String driver = "com.mysql.jdbc.Driver";
            String user = "root";
            String pass = "nilesh";
            try {
                // Loads the driver class; its static initializer registers it with DriverManager.
                Class.forName(driver);
                con = DriverManager.getConnection(url, user, pass);
                System.out.println("Connection is created....");
                //TODO
                //rest of the code goes here
            } catch (Exception e) {
                System.out.println(e);
            }
        }
    }

Let's go through the code above. The url indicates the location of the database schema; here the database is nilesh. Then there is driver, which holds the fully qualified JDBC driver class name.

Exporting/Saving Putty/SecureCRT/SecureFX sessions on Windows!!

This is my first stint at blogging, and I thought why not start by sharing some useful information and tricks. To begin with, here are a couple of tips that could save you some time. Actually, I once had to take a backup and move my work data to a new laptop, and as we normally do, I put my data on an external drive and copied it over to the new box. During this exercise I realized that this only backs up the data. What about the work I am doing... my configurations and installations? What the heck... I would need to spend a lot of time redoing all that. I was using Putty, SecureCRT and SecureFX to log on to client-side boxes and work on them, and all three had at least 30 saved sessions with IP/user/password information; reconfiguring those sessions would be a daunting task. I did some research (well, by that I mean googling too :) ) and here is how a little information can help you... If you are usin…