Do you think your code is perfect? Well, think again.

Knoldus Blogs

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Martin Fowler

“I can code,” I always say to myself.
But do others think the same? Is my code good enough for people to understand? Do other people think, “Damn, I wish I could write code like that”? That’s the question I have always had in mind.

The definition of clean code varies from person to person; it is subjective, and every developer has a personal take on it. The simplest definition I have found is this:

Clean code is code that is easy to understand, easy to change, and means the same thing to everyone.
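As a toy illustration of that definition, here is the same check written twice in Scala; the names and numbers are invented for the example:

```scala
// Code only its author understands: what do d, x, and 30 mean?
def f(d: Int, x: Int): Boolean = d - x > 30

// The clean version: the intent is readable without any extra context.
val MinimumPassingMargin = 30

def hasComfortablyPassed(score: Int, passMark: Int): Boolean =
  score - passMark > MinimumPassingMargin

println(hasComfortablyPassed(score = 80, passMark = 40)) // prints true
```

Both functions compute the same thing, but only one of them would mean the same thing to every reader.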

In this blog, we will cover some of the best practices to keep in mind for writing good code. We will be taking reference…

View original post 1,594 more words


Is Shifting to Domain Driven Design worth your Efforts?

Knoldus Blogs

In our earlier blog, we explored a bit about Microservices. But let’s take a step back and look into how microservices can be effectively designed.
Yes, you guessed it right: we will be talking about Domain Driven Design, or what we call the DDD approach.

But before jumping into the concepts of Domain Driven Design, let’s understand two basic terms:

  • Domain:
    A domain is the sphere of knowledge and activity around which the application logic revolves.
  • Model:
    A system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.
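A tiny, hypothetical Scala sketch can make the distinction concrete. The *domain* below is online book ordering; the case classes are the *model* — the abstractions we select to solve a problem (here, pricing an order) in that domain:

```scala
// Hypothetical model for an online-bookstore domain.
final case class Book(isbn: String, price: BigDecimal)

final case class Order(items: List[Book]) {
  // Solving a problem from the domain: what does this order cost?
  def total: BigDecimal = items.map(_.price).sum
}

val order = Order(List(
  Book("978-0132350884", BigDecimal(30)),
  Book("978-0321125217", BigDecimal(45))
))

println(order.total) // prints 75
```

Note that the model deliberately describes only *selected* aspects of the domain — there is no shipping, stock, or customer here, because this model doesn’t need them.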

Now that you know what a domain and a model are, let’s try to understand what Domain Driven Design is.
Domain Driven Design is a methodology and process prescription for the development of complex systems whose focus is mapping activities, tasks, events, and data within a problem domain into…

View original post 1,075 more words

Kafka And Spark Streams: The happily ever after!!

Knoldus Blogs

Hi everyone! Today we are going to understand a bit about using Spark Streaming to transform and transport data between Kafka topics.

The demand for stream processing is increasing every day, because processing big volumes of data in batches is often not enough: we need real-time processing, especially when data volumes grow continuously and the data must be processed and maintained as it arrives.


Recently, in one of our projects, we faced exactly such a requirement. Being a newbie to Apache Spark, I had only a vague idea of what to do, so I turned to the Apache Spark documentation, which helped me understand the basic concepts of Spark, streaming, and how to transport data using streams.

To give you a heads up, Spark Streaming is an extension of the core Spark API that enables scalable…
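To give a flavour of what moving data between Kafka topics with Spark can look like, here is a rough, hypothetical sketch using Spark’s Structured Streaming Kafka source and sink. The topic names, broker address, and checkpoint path are invented, it needs a running Spark and Kafka setup, and the original post may well use a different API:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("kafka-transport").getOrCreate()

// Read a stream from a source topic; Kafka keys and values arrive as bytes.
val in = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "source-topic")
  .load()

// Transform: cast to strings and, as a trivial example, upper-case each value.
val out = in.selectExpr(
  "CAST(key AS STRING) AS key",
  "UPPER(CAST(value AS STRING)) AS value"
)

// Write the transformed stream back out to a sink topic.
out.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "sink-topic")
  .option("checkpointLocation", "/tmp/kafka-transport-checkpoint")
  .start()
  .awaitTermination()
```

The checkpoint location is what lets the job recover its position in the source topic after a restart.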

View original post 603 more words

“Why do you always choose Microservices over me?” said the Monolithic architecture

Knoldus Blogs

Ever wondered why companies like Apple, eBay, and Netflix care so much about microservices? What makes this architecture so special that it gets so much hype? Is it worth the pain and effort of shifting an entire running application from a monolithic to a microservices architecture? Many such questions came to our minds when we started using microservices in our projects.
In this blog, we will try to answer these questions, take a deeper look at the microservices architecture, and compare it with the monolithic architecture.

What Are Microservices and How Are They Different from a Monolith?


Microservices are small, autonomous services that work together. Let’s simplify this definition a little bit more.
Microservices – also known as the microservice architecture – form an architectural style that structures an application as a collection of loosely coupled services, each implementing a business capability. The microservice architecture…

View original post 1,435 more words

The curious case of Cassandra Reads

Knoldus Blogs

In our previous blog, we discovered how Cassandra handles its write queries. Now it’s time to understand how it ensures all the read requests are fulfilled. Let’s first have an overall view of Cassandra. Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.

Now let’s jump to how Cassandra handles read queries.

Reading in Cassandra 

In Cassandra, it is easy to read data because clients can connect to any node in the cluster to perform reads, without having to know whether a particular node acts as a replica for that data.
If a client connects to a node that doesn’t have the data it’s trying to read, that node will act as the coordinator and fetch the data from a node that…
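The idea can be sketched in a few lines of plain Scala — a toy model, not actual Cassandra code, with node names and keys invented for illustration:

```scala
// Toy model: each node stores some keys; any node can take a client's read.
final case class Node(name: String, data: Map[String, String])

// If the contacted node has the key, it answers directly; otherwise it plays
// coordinator and fetches the value from whichever replica holds it.
def read(contacted: Node, cluster: Seq[Node], key: String): Option[String] =
  contacted.data.get(key).orElse(cluster.collectFirst {
    case n if n.data.contains(key) => n.data(key)
  })

val cluster = Seq(
  Node("A", Map("user:1" -> "alice")),
  Node("B", Map("user:2" -> "bob"))
)

// The client contacts node B, which doesn't own "user:1", so B coordinates.
println(read(cluster(1), cluster, "user:1")) // prints Some(alice)
```

This is why clients don’t need any placement knowledge: any node they reach can either serve the read or coordinate it.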

View original post 917 more words

Cassandra Writes: A Mystery?

Knoldus Blogs

Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.

It is a peer-to-peer database in which every node in the cluster constantly communicates with the others to share and receive information (node status, data ranges, and so on). There is no concept of master or slave in a Cassandra cluster; any node can be the coordinator for a given query.

In this blog, we’ll take a look behind the scenes to see how Cassandra handles write queries. For Cassandra Basics and installation, you can refer to our earlier blog.

Writing in Cassandra

When a client performs a write operation against a Cassandra database, the data passes through several stages on the write path, starting with the immediate logging of the write and ending with a write of…
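Those stages can be mimicked in a toy Scala sketch — illustrative only, since real Cassandra commit logs, memtables, and SSTables are far more involved:

```scala
import scala.collection.mutable

// Toy write path: log first, then the memtable, flushing to an "SSTable"
// (modelled here as an immutable snapshot) when the memtable fills up.
class ToyWritePath(flushThreshold: Int) {
  val commitLog = mutable.ArrayBuffer.empty[(String, String)] // durability
  val memtable  = mutable.Map.empty[String, String]           // fast in-memory store
  val sstables  = mutable.ArrayBuffer.empty[Map[String, String]]

  def write(key: String, value: String): Unit = {
    commitLog += (key -> value)                   // 1. immediately log the write
    memtable(key) = value                         // 2. apply it to the memtable
    if (memtable.size >= flushThreshold) flush()  // 3. flush when full
  }

  private def flush(): Unit = {
    sstables += memtable.toMap
    memtable.clear()
  }
}

val db = new ToyWritePath(flushThreshold = 2)
db.write("k1", "v1")
db.write("k2", "v2") // the second write fills the memtable and triggers a flush

println(db.sstables.size) // prints 1
println(db.memtable.size) // prints 0
```

The ordering is the point: the commit log entry lands before anything else, so a crash after step 1 still leaves the write recoverable.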

View original post 911 more words

Reactors.IO: Actors Done Right


Knoldus Blogs

In our previous blog, we explored the upcoming version of Java, i.e., Java 9. This time we will focus on Scala: in this post we will look at a new reactive programming framework for Scala applications, Reactors.IO, which fuses the best parts of functional reactive programming and the actor model.
It allows you to create concurrent and distributed applications more easily by providing correct, robust, and composable programming abstractions. Primarily targeting the JVM, the Reactors framework has bindings for both Scala and Java.

Setting Up

To get started with Reactors.IO, grab the latest version distributed on Maven. If you are using SBT, add the following to your project definition:
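A sketch of such a project definition is below; the version number is an assumption, so check Maven Central for the current reactors release:

```scala
// Hypothetical build.sbt fragment — verify the latest version on Maven Central.
libraryDependencies += "io.reactors" %% "reactors" % "0.8"
```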


Then simply import the io.reactors package with import io.reactors._, and you are ready to go.

View original post 1,077 more words