
Introducing Flet - The Python Framework That Takes Inspiration from Flutter

· 4 min read
Alex Han
Software Engineer


At a time when mobile app development has become the norm, it's not surprising to see an increasing number of frameworks that make the process more efficient. One such framework is Flet, a Python-based framework that enables developers to build native iOS and Android apps from a single codebase. In this article, I will delve into the world of Flet and explore its origins, characteristics, and usage. I will also compare Flet with another popular framework, Flutter, to understand how they differ and where Flet shines.

What is Flet and why was it created?

Flet was created by Feodor Fitsner and first released in 2022 as a Python framework built around Flutter's approach to building user interfaces. The framework quickly gained popularity within the Python community, and today it has an active community of contributors and users.

The primary goal of Flet is to simplify the development process by enabling developers to write code once and deploy it on multiple platforms. Under the hood, Flet renders its UI with Flutter, while the application logic stays in ordinary Python, making it easy to integrate with other Python libraries.

The characteristics of Flet

At its core, Flet is designed to be simple, intuitive, and flexible. Here are some of the key characteristics of the framework:

  • Declarative Syntax: Like Flutter, Flet uses a declarative syntax that allows developers to describe the user interface in a simple and concise manner.
  • Hot Reload: Flet's hot reload feature allows developers to make changes to the code and see the results in real-time, without the need to restart the application.
  • Widgets: Flet uses a widget-based system, similar to Flutter, to build the user interface. Widgets are reusable building blocks that can be combined to create complex UI elements.
  • Material Design: Flet is built on top of Google's Material Design, making it easy for developers to create beautiful and consistent user interfaces.
  • Pythonic: Flet follows Pythonic principles, making it easy for Python developers to learn and use the framework.
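Before diving into a full login screen, the widget-tree idea can be sketched in a few lines of plain Python. This is not Flet's actual API, just an illustration of declarative UI: the interface is described as a tree of data, and a renderer walks the tree.

```python
from dataclasses import dataclass, field
from typing import List

# A toy widget tree; NOT Flet's real API, just the idea behind it:
# the UI is plain data, and a renderer traverses it.
@dataclass
class Widget:
    kind: str
    label: str = ""
    children: List["Widget"] = field(default_factory=list)

def render(widget: Widget, depth: int = 0) -> str:
    """Flatten the declarative tree into an indented outline."""
    line = "  " * depth + f"{widget.kind}({widget.label!r})"
    return "\n".join([line] + [render(c, depth + 1) for c in widget.children])

login = Widget("Column", children=[
    Widget("Text", "Login"),
    Widget("TextField", "Username"),
    Widget("TextField", "Password"),
    Widget("Button", "Login"),
])

print(render(login))
```

The point is that the code describes *what* the UI is, not the sequence of calls that builds it; Flet and Flutter both apply this principle.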

Comparing Flet with Flutter


  1. Language

The primary difference between Flet and Flutter is the programming language used. Flutter uses the Dart programming language, while Flet uses Python. For Python developers, Flet provides a more accessible option.

  2. Syntax

Python developers will find Flet's syntax more familiar and straightforward than Flutter's, making it easier to learn and use.

  3. Customizability

Both Flet and Flutter are highly customizable, but Flet provides more flexibility due to Python's dynamic nature. Python developers can leverage their existing skills to create more complex and customized apps.

Examples of how to use Flet

Now that I've explored some of the characteristics of Flet, let's take a look at an example of how to use the framework. I'll compare Flet to Flutter to highlight the similarities and differences between the two frameworks.

from flet import Column, ElevatedButton, Text, TextField

class LoginScreen:
    def build(self):
        return Column(
            controls=[
                Text('Login'),
                TextField(label='Username'),
                TextField(label='Password', password=True),
                ElevatedButton('Login'),
            ]
        )

And here's the equivalent code in Flutter

import 'package:flutter/material.dart';

class LoginScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        Text('Login'),
        TextField(
          decoration: InputDecoration(hintText: 'Username'),
        ),
        TextField(
          decoration: InputDecoration(hintText: 'Password'),
        ),
        ElevatedButton(
          onPressed: () {},
          child: Text('Login'),
        ),
      ],
    );
  }
}

As you can see, the code in Flet is very similar to the code in Flutter. Both frameworks use widgets to build the user interface, and the code is easy to read and understand.

Flutter can also be used to build such an app, but developers may need to spend more time learning the framework and the Dart language before they can start building.

Conclusion

If you are a Python developer looking to start building mobile applications, Flet is an excellent choice. It provides many useful features and widgets, as well as comprehensive documentation, making it easy for beginners to get started quickly. Additionally, Flet's similarity to Flutter makes it easy to switch between the two frameworks, allowing developers to choose the one that best suits their needs.

Introducing Multy: A Comprehensive Comparison with Terraform

· 5 min read
Alex Han
Software Engineer


At the core of infrastructure as code (IaC) lies the concept of automating infrastructure management through code. Terraform, an open-source IaC tool, has gained immense popularity over the years due to its versatile capabilities, large provider ecosystem, and robust community support. But with the growing need for more efficient and flexible IaC tools, a new player has emerged in the field: Multy.

In this article, I will introduce Multy, discuss its features and advantages, and provide a comprehensive comparison with Terraform to help you decide which tool is best suited for your infrastructure management needs.

What is Multy and Why was it Created?

Multy is an open-source IaC tool that was created to provide a more flexible and efficient way of managing cloud infrastructure. It was designed to address some of the limitations of other popular IaC tools, such as Terraform. One of the main features of Multy is its ability to target multiple cloud providers, such as AWS, Azure, and GCP, from a single definition. This means that users can manage infrastructure across providers with one tool and one configuration, rather than maintaining separate, provider-specific code.

Features of Multy


Multy has several unique features that make it a powerful IaC tool. Here are some of its key features:

  1. Multi-Cloud Support - Multy can manage infrastructure across multiple cloud providers from a single, cloud-agnostic definition, rather than requiring separate configurations per provider.
  2. Plugin System - Multy has a plugin system that allows users to extend its functionality and customize it to meet their specific needs.
  3. Modular Design - Multy has a modular design, which makes it easy to manage infrastructure across multiple environments.
  4. Powerful Syntax - Multy's syntax is powerful and easy to use, which makes it easy to write and manage complex infrastructure code.
  5. Terraform Compatibility - Multy is compatible with Terraform, which means that users can use existing Terraform modules with Multy.
  6. Native Kubernetes Support - Multy has native support for Kubernetes, making it easier for users to manage their Kubernetes clusters and associated resources.
  7. Secure Remote State Management - Multy provides secure remote state management out of the box, ensuring that your infrastructure state is always safe and in sync.

Pros and Cons of Multy Compared to Terraform

Multy and Terraform are both powerful IaC tools, and each has its advantages and disadvantages. Here are some of the pros and cons of Multy compared to Terraform:

Pros of Multy

  1. Multi-Cloud Support - Multy lets you define a resource once and deploy it to any supported cloud, whereas Terraform configurations must be written separately for each provider.
  2. Plugin System - The plugin system makes it easy to extend Multy and tailor it to specific needs.
  3. Modular Design - Infrastructure can be broken into modules that are easy to manage across multiple environments.
  4. Powerful Syntax - Multy's DSL is expressive yet easy to use, which simplifies writing and maintaining complex infrastructure code.

Cons of Multy

  1. Limited Community Support - Multy is a relatively new tool with a smaller community than Terraform, which can make it harder to find resources and support.
  2. Learning Curve - Multy's syntax and features can be challenging to learn, especially for beginners.
  3. Terraform Compatibility - While Multy is compatible with Terraform, there may be some differences in syntax and functionality between the two tools.

How to Use Multy

Using Multy is straightforward. Here's a step-by-step guide to get started with Multy:

  1. Install Multy: The first step is to install Multy on your local machine or server. You can find detailed instructions on how to install Multy in the official documentation.
  2. Create Your Multy Project: Once you have installed Multy, you can create your Multy project. A Multy project is a set of infrastructure resources that you want to manage using Multy. You can create a new project by running the following command:

multy init

  3. Define Your Infrastructure Resources: After creating your Multy project, you need to define your infrastructure resources using the Multy DSL. Multy DSL is a simple language that enables you to define your infrastructure resources in a cloud-agnostic way.
  4. Apply Your Infrastructure Changes: Once you have defined your infrastructure resources, you can apply your changes to your infrastructure using the following command:

multy apply
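As a rough sketch of what a cloud-agnostic resource definition in the Multy DSL might look like, here is a hypothetical fragment; the resource type and attribute names are illustrative, not copied from Multy's documentation:

```hcl
# Hypothetical sketch: one definition deployed to two clouds.
multy "virtual_network" "example_vn" {
  name       = "example_vn"
  cidr_block = "10.0.0.0/16"
  clouds     = ["aws", "azure"]
}
```

A single block like this is the kind of definition Multy can translate into the equivalent AWS and Azure primitives.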

Conclusion

Multy is a promising alternative to Terraform that offers several unique features, including multi-cloud provider support, native Kubernetes support, and secure remote state management. While it has some limitations, it's worth exploring if you are looking for a cloud-agnostic IaC tool that can help you manage your infrastructure across multiple cloud providers.

Introduction to Terraform: Simplify Your Infrastructure Management

· 5 min read
Alex Han
Software Engineer


As the world moves towards the cloud, managing infrastructure has become increasingly complex. Whether you're working with AWS, GCP, or Azure, the sheer number of services available can be overwhelming. Infrastructure management is no longer just about keeping the lights on; it's about keeping your applications running smoothly while keeping costs under control. Enter Terraform - an open-source infrastructure-as-code tool that simplifies infrastructure management.

What is Terraform?

Terraform is a tool that allows you to define your infrastructure as code. This means that instead of configuring your infrastructure manually, you can write code that describes the desired state of your infrastructure. Terraform then takes care of creating, updating, or deleting resources to make sure that your infrastructure matches the code you've written.
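The desired-state idea can be sketched in a few lines of Python. This is not Terraform's actual algorithm, just an illustration of how a plan step diffs the desired configuration against the actual infrastructure:

```python
# A toy reconciliation "plan": diff desired state against actual state
# and emit the create/update/delete actions needed to converge.
# Illustrative only; Terraform's real planner is far more involved.
def plan(desired: dict, actual: dict) -> list:
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != config:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"web": {"type": "t2.micro"}, "bucket": {"acl": "private"}}
actual  = {"web": {"type": "t2.small"}, "old_db": {"size": 10}}

print(plan(desired, actual))
```

Running this over the sample states yields an update for "web", a create for "bucket", and a delete for "old_db"; Terraform applies the same convergence logic to real cloud resources.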

Why do you need Terraform?


Terraform offers many benefits over manual infrastructure management. Firstly, it simplifies the process of defining your infrastructure. Instead of manually creating and configuring resources, you can use code to define your infrastructure. This makes it easier to create repeatable, predictable infrastructure that can be easily tested and modified.

Secondly, Terraform allows you to manage your infrastructure in a modular way. Instead of having to manage a monolithic infrastructure, you can break it down into smaller, more manageable components. This makes it easier to understand and modify your infrastructure.

Finally, Terraform makes it easier to manage infrastructure at scale. With Terraform, you can define your infrastructure in a way that makes it easy to replicate across multiple environments. This makes it easier to manage infrastructure across different regions, data centers, or cloud providers.
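Terraform modules make that replication concrete: the same module can be instantiated once per environment with different inputs. The module path and variables below are illustrative:

```hcl
module "staging_network" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"
  env        = "staging"
}

module "production_network" {
  source     = "./modules/network"
  cidr_block = "10.1.0.0/16"
  env        = "production"
}
```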

How to Install Terraform

Terraform can be installed on Windows, Mac, and Linux. The installation process is straightforward and can be completed in a few simple steps. To install Terraform, follow the instructions provided in the official Terraform documentation for your specific operating system.

Linux Installation

For Linux users, the easiest way to install Terraform is through a package manager such as apt or yum. On Ubuntu, after adding the HashiCorp package repository (as described in the official docs), Terraform can be installed with apt:

sudo apt-get update
sudo apt-get install terraform

macOS Installation

On macOS, you can install Terraform using the popular package manager Homebrew:

brew install terraform

Windows Installation

For Windows users, Terraform can be installed by downloading the appropriate executable file from the official Terraform website. Once downloaded, unzip the file and add the binary to your system's path.

Connecting to AWS, GCP, and Azure

Once Terraform is installed, you'll need to configure it to work with your cloud provider. This involves setting up credentials and configuring the provider settings. Again, the official Terraform documentation provides detailed instructions for each cloud provider.

AWS Connection

To connect to AWS, you need to have an AWS account and create an access key and secret key. Once you have those, add the following code to your Terraform configuration file:

provider "aws" {
  region     = "us-west-2"
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
}

Make sure to replace YOUR_ACCESS_KEY and YOUR_SECRET_KEY with your actual access key and secret key. For anything beyond experimentation, prefer environment variables or a shared credentials file over hardcoding keys in configuration.

GCP Connection

To connect to GCP, you need to have a GCP account and create a service account key. Once you have that, add the following code to your Terraform configuration file:

provider "google" {
  credentials = file("path/to/your/credentials.json")
  project     = "your-project-id"
  region      = "us-west1"
}

Make sure to replace path/to/your/credentials.json and your-project-id with the path to your credentials file and your actual GCP project ID.

Azure Connection

To connect to Azure, you need to have an Azure account and create a service principal. Once you have that, add the following code to your Terraform configuration file:

provider "azurerm" {
  subscription_id = "YOUR_SUBSCRIPTION_ID"
  client_id       = "YOUR_CLIENT_ID"
  client_secret   = "YOUR_CLIENT_SECRET"
  tenant_id       = "YOUR_TENANT_ID"
}

Make sure to replace YOUR_SUBSCRIPTION_ID, YOUR_CLIENT_ID, YOUR_CLIENT_SECRET, and YOUR_TENANT_ID with your actual Azure subscription ID, client ID, client secret, and tenant ID.

Managing Infrastructure with Terraform

Now that you have Terraform installed and connected to your cloud provider, you can start managing infrastructure as code. Here are some examples of how to add, update, change, or remove real infrastructure using Terraform.

Adding, Updating, Changing, or Removing Real Infrastructure with Terraform

Let's look at how you can use Terraform to manage your infrastructure. In this example, I'll use AWS as the cloud provider.

  1. Define your infrastructure: The first step is to define the infrastructure you want to create. In this example, I'll create an EC2 instance and an S3 bucket.

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}
  2. Initialize Terraform: Next, I need to initialize Terraform by running the terraform init command. This will download the necessary providers and modules.

  3. Plan the changes: I can now use the terraform plan command to preview the changes that Terraform will make to the infrastructure.

  4. Apply the changes: Finally, I can apply the changes by running the terraform apply command. This applies the configuration and creates the resources defined in it.

  5. Update the Configuration: If you want to change the configuration, you can update the Terraform configuration file and run terraform apply again. Terraform will automatically detect the changes and update the resources accordingly.

  6. Remove Resources: If you want to remove the resources defined in the configuration, you can run terraform destroy. This will remove all resources defined in the Terraform configuration.
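For example, changing a single attribute in the earlier configuration and re-running terraform apply is enough for Terraform to detect the drift and update the instance in place:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.small" # was t2.micro; terraform apply resizes the instance
}
```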

Conclusion

Infrastructure as code has become an essential part of modern software development, and Terraform is one of the most popular tools for managing infrastructure as code.

With Terraform, you can manage infrastructure across multiple cloud providers, define infrastructure in a declarative language, and version control your infrastructure. If you're not already using Terraform, I highly recommend that you give it a try.

In the next blog post, I will introduce Multy, an open-source tool for managing infrastructure across multiple cloud providers with a single configuration. Stay tuned!

Memphis: A Comprehensive Overview and Comparison with RabbitMQ and Kafka

· 5 min read
Alex Han
Software Engineer


Memphis is an open-source message queuing and streaming platform that has gained a lot of traction in recent years due to its powerful features and ease of use. It was created to solve the problems of scaling, reliability, and performance in large-scale distributed systems. In this article, I will take a deep dive into Memphis and compare it with RabbitMQ and Kafka, which are similar technology stacks. I will also show you an example of how to use Memphis in detail and discuss the limitations that still exist.

Overview of Memphis

Memphis is a highly performant messaging system that is designed to be easy to use and highly scalable. It is written in Go and built on top of NATS, which it leverages to deliver exceptional performance on modern hardware. Memphis is designed to be highly fault-tolerant and can handle the loss of nodes in a cluster without any loss of data or service interruption, and its clustered architecture allows for seamless horizontal scaling.

Characteristics of Memphis

One of the biggest strengths of Memphis is its ease of use. It comes with a simple and intuitive API that makes it easy to get started with, and excellent documentation that makes it easy to learn and troubleshoot. Memphis is highly reliable and can deliver messages with low latency, making it ideal for use cases that require real-time data processing.

When compared to RabbitMQ and Kafka, Memphis is faster and can deliver messages with lower latency. It is also easier to use and has a lower learning curve than both RabbitMQ and Kafka. Memphis also supports a wide range of protocols, including HTTP, MQTT, and AMQP, which makes it highly versatile.

Comparison with RabbitMQ

RabbitMQ is a popular message broker that is widely used in enterprise environments. It is written in Erlang and is designed to be highly reliable and scalable. RabbitMQ is known for its advanced features, such as message routing, clustering, and federation.

When compared to Memphis, RabbitMQ is more complex and has a steeper learning curve. It is also slower than Memphis and has higher latency when delivering messages. RabbitMQ has more advanced features than Memphis, but these features come at the cost of increased complexity.

Comparison with Kafka

Kafka is a distributed streaming platform that is widely used for building real-time data pipelines and streaming applications. It is written in Scala and Java and is designed to be highly scalable and fault-tolerant. Kafka is known for its high throughput and low latency, making it ideal for use cases that require real-time data processing.

When compared to Memphis, Kafka is more complex and has a steeper learning curve. It is also slower than Memphis and has higher latency when delivering messages. Kafka has more advanced features than Memphis, such as the ability to store and process streams of data over extended periods of time, but these features come at the cost of increased complexity.

Example of using Memphis

To see how Memphis is used, let's look at an example with code. Suppose I have two applications that need to communicate asynchronously. The first application produces messages, while the second consumes them. To use Memphis for this, I first need to set up a Memphis broker and create a queue for the messages. Here's how to do it.

const { memphis } = require('memphis');

// NOTE: the calls below are simplified for illustration; consult the
// official Memphis SDK documentation for the exact connection,
// producer, and consumer APIs.
const broker = new memphis.Broker();
const queue = broker.createQueue('example_queue');

// Producer application code
const producer = broker.createProducer('example_queue');
producer.send({ message: 'Hello, World!' });

// Consumer application code
const consumer = broker.createConsumer('example_queue');
consumer.on('message', (msg) => {
  console.log(`Received message: ${msg.content.toString()}`);
});

This code creates a Memphis broker, creates a queue called 'example_queue', and sets up a producer and consumer for the queue. The producer sends a message to the queue, while the consumer listens for messages and prints them to the console. This is just a simple example, but it demonstrates how easy it is to use Memphis for asynchronous communication between applications.

Here's another example, this time in Rust, of sending a message from a producer to a consumer.

use memphis::client::MemphisClient;

// NOTE: simplified for illustration; check the Memphis Rust client
// documentation for the exact method names and signatures.
fn main() {
    let client = MemphisClient::new("localhost:5555").unwrap();

    let message = "Hello, world!";
    let topic = "my-topic";

    client.send(topic, message).unwrap();
}

This example creates a Memphis client and sends a message to the "my-topic" topic.

Limitations of Memphis

While Memphis is an excellent messaging system, there are still some limitations that exist.

One of the main limitations is its lack of support for some messaging patterns such as publish-subscribe and topic-based messaging. This can make it difficult to implement certain types of applications that require these patterns.

Another limitation is its lack of support for dynamic routing, which can make it challenging to route messages to specific destinations dynamically. Memphis also has a smaller community than more established brokers, so finding resources and support can be harder.

Conclusion

Memphis is a reliable and high-performance messaging system that is easy to use and has several strengths compared to similar systems such as RabbitMQ and Kafka. Although it does have some limitations, the Memphis team is constantly working to make it more versatile and robust.

Memphis is still in beta and has its share of limitations, but it has real strengths and offers a fresh, easy-to-use technology stack. I recommend trying it now so that you can apply it quickly once the stable version is released.

Exploring Thin Backend

· 9 min read
Alex Han
Software Engineer

image

Introduction

If you're familiar with web development, you've probably heard of the term "backend". It's the part of the website that the user doesn't see, where all the code and logic that drives the website is stored. Recently, there has been a lot of buzz about "thin backend" - a new approach to building backend systems. In this blog post, we'll explore what a thin backend is, why it was created, what its characteristics and strengths are, and how to use it, even if you're a beginner.

What is a Thin Backend?

A thin backend is a new approach to building backend systems that is designed to be lightweight and highly efficient. It was created in response to the growing demand for more scalable and flexible backend systems that can handle the complex needs of modern web applications. A thin backend typically consists of a small set of core services that are highly optimized for speed and performance, and can be easily combined with other services to create a complete backend system.

Traditionally, backend systems have been built using monolithic architectures, which means that all the code and logic is combined into a single, large application. While this approach can work well for smaller projects, it can become unwieldy and difficult to maintain as the project grows in size and complexity.

A thin backend, on the other hand, is designed to be highly modular, with each service focused on a specific task or function. This makes it much easier to add, remove, or modify services as needed, without affecting the rest of the system.
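To illustrate the "small, focused service" idea using only the Python standard library (not any particular thin backend framework), here is a single-purpose health-check service and a client call against it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately tiny service: one endpoint, one responsibility.
# Illustrative stdlib sketch of a focused service, not a production setup.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    reply = json.loads(resp.read())

server.shutdown()
print(reply)
```

A thin backend composes many such narrowly scoped services; each one can be replaced or scaled independently of the rest.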

Characteristics and Strengths of Thin Backend


One of the biggest advantages of a thin backend is its simplicity. Because it's designed to be lightweight and highly efficient, it's much easier to build, maintain, and scale than traditional backend systems. Thin backends are also highly modular, which means that you can easily add or remove services as needed, without affecting the rest of the system. This makes it much easier to adapt to changing requirements and to scale the system as traffic increases.

Another strength of a thin backend is its performance. Because it's highly optimized for speed and efficiency, it can handle large amounts of traffic with ease, without sacrificing performance. Additionally, because it's so lightweight, it can run on smaller, less expensive servers, which can save you a lot of money in hosting costs.

Thin backends are also highly flexible, which means that you can easily integrate them with other services and systems. This makes it much easier to build complex applications that require multiple backend systems to work together.

Finally, because thin backends are built using open-source frameworks, they are highly customizable. This means that you can modify the code to better suit your specific needs, or even create your own custom services to add to the system.

Advantages of Thin Backend


Apart from the main strengths of thin backend mentioned above, there are several other advantages to using this approach. One of the biggest is that thin backends are highly scalable. Because they are designed to be modular and highly efficient, they can handle large amounts of traffic with ease, and can be easily scaled up or down as needed.

Another advantage of thin backends is that they are highly resilient. Because each service is focused on a specific task or function, the system as a whole is much more resilient to failure. If one service fails, it doesn't necessarily mean that the entire system will fail, which is a huge advantage when it comes to maintaining uptime and ensuring that your application is always available.

Thin backends are also highly secure. Because they are built using open-source frameworks, they are constantly being updated and improved to address any security vulnerabilities. This means that you can be confident that your backend system is secure and that your data is safe.

How to Use Thin Backend

If you're interested in using a thin backend, the good news is that it's relatively easy to get started, even if you're a beginner. There are a number of open-source frameworks, such as Express.js and Flask, that provide a simple and intuitive way to build lightweight backend systems. These frameworks typically provide a set of core services, such as routing and middleware, that you can use to build your own backend system.

To start building your own thin backend, you'll need to have some basic knowledge of programming languages such as JavaScript or Python. You'll also need to be familiar with concepts such as APIs and HTTP requests. Once you have these basics down, you can start building your own backend system using one of the available frameworks.

Simple Usage

This is the quick start that appears when you sign up for Thin Backend; it's very simple.

  1. Check out the project template:

npx degit digitallyinduced/thin-typescript-react-starter thin-project

  2. Switch directories, install dependencies, and add the TypeScript definitions:

cd thin-project && npm install && npm install --save-dev "https://thin.dev/types/6cca19dc-dc50-48ad-aad4-c5286134d9d0/cappulbkdeIHkbOOTUJoxqAakuLZBrzw"

  3. Configure the backend URL in .env:

echo 'BACKEND_URL=https://test-161.thinbackend.app' > .env

  4. Start the dev server:

npm run dev

  5. Now you can see your app!

  6. By default, you can inspect the database where the users table has been created. You can register as a member by clicking the Sign Up button, and you can watch newly added users appear in the database in real time. Login, signup, and logout have already been implemented for you.

  7. The project structure is simple. Using the thin-backend-react library, place ThinBackend as the top-level tag and put the tags to be rendered as its children. If the requireLogin attribute is set to true on the ThinBackend tag, users must log in through the login form provided by Thin Backend before the children are rendered.

Through the useCurrentUser hook provided by thin-backend-react, the current user's information can be retrieved and used. A logout function is provided by the thin-backend library. In this way, the login, logout, and registration functionality that is usually cumbersome to build at the start of a project can be implemented very simply, and you can get straight to implementing the actual content of your web site.
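Sketched in code, the pattern described above looks roughly like this; the component and function names follow the description here, but treat the exact shapes as a sketch rather than the library's definitive API:

```tsx
import { ThinBackend, useCurrentUser } from 'thin-backend-react';
import { logout } from 'thin-backend';

function App() {
    // requireLogin shows Thin Backend's built-in login form
    // until the visitor is authenticated.
    return (
        <ThinBackend requireLogin>
            <Dashboard />
        </ThinBackend>
    );
}

function Dashboard() {
    const user = useCurrentUser(); // current user, or null while loading
    return (
        <div>
            <p>Logged in as {user?.email}</p>
            <button onClick={() => logout()}>Logout</button>
        </div>
    );
}
```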

Common Use Cases for Thin Backend

There are several common use cases for thin backend, which make it a popular choice for developers. One example is for building microservices. Microservices are small, independent services that work together to form a larger application. Because each service is focused on a specific task or function, a thin backend is the perfect architecture for building microservices.

Another common use case for thin backend is for building APIs. APIs are a way for different applications to communicate with each other. Because a thin backend is highly modular and flexible, it's a great choice for building APIs that can work with a wide variety of different applications.

Thin backend is also a popular choice for building real-time applications, such as chat applications or real-time gaming applications. Because it's highly optimized for speed and performance, it can handle the large amounts of data that are typically involved in real-time applications.

Thin Backend Has Some Limitations

One of the main limitations is that it may not be suitable for larger, more complex websites. Thin Backend is designed to be lightweight and efficient, which can make it an excellent choice for small to medium-sized websites. However, larger websites may require more complex functionality and may benefit from a traditional, hand-built backend stack.

Another limitation of Thin Backend is that it may require more development time and expertise than other similar technologies. Because Thin Backend handles the data layer in its own way, creating custom modules and integrating them into a website may require more coding expertise than with more conventional stacks. Additionally, because Thin Backend is a relatively new technology, there may be fewer resources and support available compared to more established technologies.

Another limitation of Thin Backend is that it may not be compatible with all hosting environments. Thin Backend typically requires a web server with certain configurations to function properly, which may limit the choice of hosting providers for a website built with Thin Backend.

Conclusion

In conclusion, despite these limitations, Thin Backend remains an excellent choice for many web development projects, particularly those that require a lightweight, fast-loading website with a high degree of customization and security.

A thin backend is a new approach to building backend systems that is lightweight, modular, and highly efficient. It's designed to be easy to build, maintain, and scale, and can handle large amounts of traffic with ease. If you're interested in building your own thin backend, there are a number of open-source frameworks available that provide a simple and intuitive way to get started. So why not give it a try and see what a thin backend can do for your web application?

As more and more web applications are developed, the need for scalable backend systems that can handle large amounts of traffic becomes increasingly important. By adopting a thin backend approach, you can build a system that is efficient, easy to maintain, and highly adaptable to changing requirements.

Overall, thin backend is a great choice for developers looking for a lightweight, modular, and scalable backend system that can handle large amounts of traffic with ease. Its simplicity, flexibility, and performance make it a popular choice for many developers, and its open-source nature means that it's constantly being improved and updated to address any issues or vulnerabilities. So if you're looking to build a new backend system, consider giving a thin backend a try - you may be surprised at just how effective it can be.

Dragonfly: The In-Memory Data Store You Need to Know About

· 6 min read
Alex Han
Software Engineer

image Are you tired of slow and inefficient data storage solutions? Look no further than Dragonfly, the in-memory data store that is taking the tech world by storm. With lightning-fast speed and compatibility with existing Redis and Memcached APIs, Dragonfly is the perfect solution for anyone looking to improve their data storage capabilities.

Features of Dragonfly

Upon entering Dragonfly's homepage, you'll be greeted with a plethora of impressive features. First and foremost, Dragonfly boasts lightning-fast speed that is unmatched by other data storage solutions. Additionally, it offers high availability and data durability, ensuring that your data is always safe and accessible.

But that's not all. Dragonfly also offers support for multiple data types, including strings, hashes, and sets. It also supports advanced features like transactions and Lua scripting, giving you even more control over your data.

The Benefits of Using Lua Scripting with Dragonfly

One of the most powerful features of Dragonfly is its support for Lua scripting. With this functionality, you can execute complex operations on your data directly within the data store, without the need to move it to an external system.

This provides a number of significant benefits. First and foremost, it allows for greater efficiency and speed. By performing operations within the data store itself, you can reduce network overhead and minimize latency.

Additionally, Lua scripting enables greater flexibility in how you manage your data. You can easily write custom scripts that perform specific functions or automate certain tasks, all while taking advantage of Dragonfly's lightning-fast performance.

Overall, if you're looking to maximize the speed and efficiency of your data storage solution while maintaining flexibility and control over your data management processes, Dragonfly with Lua scripting is an excellent choice.
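As a concrete illustration of the pattern, here is a minimal sketch of an atomic rate limiter done as a Lua script. The script, the key scheme (`rate:<user>`), and the helper name are all illustrative, not part of Dragonfly itself; the only assumption is that Dragonfly accepts the standard Redis-compatible EVAL command, as its API compatibility suggests.

```python
# Lua runs inside the data store, so the INCR and EXPIRE below happen
# atomically with no extra network round trips.
RATE_LIMIT_SCRIPT = """
local hits = redis.call('INCR', KEYS[1])
if hits == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return hits
"""

def rate_limit_args(user_id, window_seconds):
    """Build the argument tuple passed to EVAL: (script, numkeys, key, arg)."""
    return (RATE_LIMIT_SCRIPT, 1, f"rate:{user_id}", str(window_seconds))
```

With the redis-py client this would run as `hits = client.eval(*rate_limit_args("alice", 60))`; the count-and-expire happens inside the store in one step, which is exactly the latency saving described above.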

Dragonfly vs. Redis and Memcached

image2 So how does Dragonfly stack up against other popular data storage solutions like Redis and Memcached? In terms of speed and performance, Dragonfly is the clear winner. Its multi-threaded, shared-nothing in-memory architecture allows for lightning-fast data access and scales vertically across modern multi-core machines in a way that Redis's largely single-threaded design cannot.

However, Dragonfly does have a few drawbacks. It can be more complex to set up and configure than Redis and Memcached, and it may not be the best choice for smaller-scale projects.

Setting up and Configuring Dragonfly for Optimal Performance

image3 While Dragonfly's speed and performance are impressive, it can be more complex to set up and configure than Redis and Memcached. However, with a little bit of effort, you can ensure that your Dragonfly instance is optimized for maximum efficiency.

The first step in setting up Dragonfly is to choose the right hardware. Since Dragonfly relies on in-memory storage, you'll need a machine with plenty of RAM. The more RAM you have available, the larger your data sets can be and the faster your queries will run.

Once you have your hardware in place, it's time to install and configure Dragonfly itself. Fortunately, there are many resources available online that can guide you through this process step by step.

One important consideration when configuring Dragonfly is how much memory to allocate to each data type. By default, Dragonfly allocates equal amounts of memory to all data types. However, depending on your use case, it may make sense to allocate more or less memory to certain types of data.

Another consideration is how to handle persistence. While Dragonfly keeps its working data entirely in memory, it does offer options for persisting snapshots to disk so data can survive a failure or restart.

Overall, setting up and configuring Dragonfly may require some extra effort compared to other data storage solutions. However, the benefits in terms of speed and performance make it well worth the investment. With a little bit of optimization and configuration, you can take full advantage of everything that this powerful tool has to offer.
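To make the configuration discussion concrete, here is an illustrative launch command. The flags below exist in recent Dragonfly releases, but the values are examples only; verify both names and defaults against the official docs for your version.

```shell
# Illustrative startup flags for a Dragonfly instance (check the docs for your version):
# --maxmemory        caps in-memory usage below the machine's physical RAM
# --proactor_threads sets worker threads, typically one per CPU core
# --dir              is where snapshot files are written
# --snapshot_cron    schedules periodic snapshots for persistence
dragonfly --maxmemory=12gb --proactor_threads=8 \
          --dir=/var/lib/dragonfly --snapshot_cron="*/30 * * * *"
```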

Simple Usage and Usage Plans

Despite its advanced features, Dragonfly is surprisingly easy to use. Its compatibility with Redis and Memcached APIs means that you can start using it right away with minimal setup required. Plus, Dragonfly offers a variety of usage plans to fit your needs and budget, making it accessible to businesses of all sizes.

Best Practices for Using Dragonfly in a Production Environment

While Dragonfly is a powerful and efficient data storage solution, using it in a production environment requires careful planning and execution. Here are some best practices to keep in mind when implementing Dragonfly in a production environment:

  1. Understand Your Data Access Patterns. Before implementing Dragonfly, it's important to understand your data access patterns. This includes how frequently you need to write or read data, as well as the size of your datasets.

By understanding your data access patterns, you can optimize your use of Dragonfly and ensure that it meets your performance requirements.

  2. Monitor Resource Usage. Since Dragonfly relies on in-memory storage, monitoring resource usage is critical for ensuring optimal performance. This includes monitoring CPU usage, memory usage, and network bandwidth.

By monitoring these metrics, you can identify any bottlenecks or performance issues and take action before they impact your application's performance.

  3. Implement High Availability. High availability is critical for any production environment, and Dragonfly is no exception. By implementing high availability mechanisms like clustering and replication, you can ensure that your data remains accessible even in the event of hardware failures or other issues.

  4. Use Encryption and Authentication. To protect sensitive data stored within Dragonfly, it's important to use encryption and authentication mechanisms. This includes encrypting data at rest as well as during transmission between clients and the Dragonfly instance.

Additionally, using authentication mechanisms like passwords or access keys can help prevent unauthorized access to your data.

  5. Regularly Back Up Your Data. While Dragonfly offers options for persisting data in case of failure or restarts, regular backups are still recommended to ensure that your data remains safe and accessible.

By regularly backing up your data to an external system or offsite location, you can minimize the risk of losing important information due to hardware failures or other issues.

By following these best practices when using Dragonfly in a production environment, you can ensure that it meets your performance requirements while maintaining the security and integrity of your data.
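One lightweight way to act on the monitoring advice above: since Dragonfly speaks the Redis protocol, the standard INFO command exposes memory statistics an application can poll. A small sketch follows; the sample output string is illustrative, and a live instance would supply the real text via its client's `info` command.

```python
def parse_info(info_text):
    """Parse the 'key:value' lines returned by a Redis-compatible INFO command."""
    stats = {}
    for line in info_text.splitlines():
        # skip section headers like "# Memory" and blank lines
        if line and not line.startswith("#") and ":" in line:
            key, value = line.split(":", 1)
            stats[key] = value
    return stats

def memory_usage_ratio(stats):
    """Fraction of the configured memory cap currently in use."""
    return int(stats["used_memory"]) / int(stats["maxmemory"])

# Illustrative INFO output; alert when the ratio approaches 1.0
sample = "# Memory\nused_memory:524288\nmaxmemory:1048576\n"
ratio = memory_usage_ratio(parse_info(sample))
```

Polling this ratio on a schedule, alongside OS-level CPU and network metrics, covers most of what best practice 2 asks for.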

Conclusion: Try Dragonfly Today

If you're looking for a fast, reliable, and feature-packed data storage solution, look no further than Dragonfly. With its lightning-fast speed, advanced features, and compatibility with Redis and Memcached APIs, it's the perfect choice for businesses of all sizes. So why wait? Try Dragonfly today and experience the power of in-memory data storage for yourself.

To get started, visit https://dragonflydb.io/

A Revolution in AI Chatbots (Chatting with GPT-4)

· 8 min read
Alex Han
Software Engineer

image Chatting with AI chatbots has never been more engaging and natural than with GPT-4. GPT-4, or Generative Pre-trained Transformer 4, is the latest AI model developed by OpenAI, a leading research organization in the field of artificial intelligence.

What is GPT-4?

GPT-4 is an AI model that uses deep learning algorithms to generate human-like text. It is a language model that has been trained on a massive amount of data from the internet, including books, articles, and websites. The model is pre-trained on a diverse set of tasks, such as language translation, question-answering, and text summarization.

How was GPT-4 made?

image2 GPT-4 is the result of years of research and development by OpenAI. The model is based on the Transformer architecture, which was first introduced in the paper "Attention Is All You Need" by Google researchers in 2017. The Transformer architecture is a neural network that uses self-attention mechanisms to process input data.

To train GPT-4, OpenAI used a massive amount of data from the internet, including books, articles, and websites. The model was trained using unsupervised learning, which means that it learned from the data without any human intervention. The training process took several months and required a massive amount of computational resources.

How does GPT-4 compare to previous AI chatbot models?

GPT-4 is the latest in a series of language models developed by OpenAI, following the success of GPT-3 and other models such as BERT. While these models share some similarities, there are also significant differences between them.

Compared to its predecessor, GPT-3, which was released in 2020 and quickly became known for its impressive language generation capabilities, GPT-4 promises even more advanced natural language processing abilities. With a larger training dataset and improved architecture, it is expected to generate more coherent and contextually appropriate responses than any AI chatbot before it.

BERT (Bidirectional Encoder Representations from Transformers), on the other hand, is a model developed by Google that excels at natural language understanding tasks like question answering and text classification. While it may not have the same level of language generation capabilities as GPT-4 or GPT-3, BERT's ability to understand complex contextual relationships makes it an important tool for many AI applications.

Overall, while each model has unique strengths and weaknesses, GPT-4 represents a major leap forward in AI chatbot technology with the potential to significantly improve our interactions with machines.

Characteristics of GPT-4

GPT-4 is known for its ability to generate human-like text that is engaging and natural. The model can understand the context of the conversation and generate responses that are relevant and coherent. It can also generate text in multiple languages and styles, including formal and informal language.

GPT-4 is also capable of learning from new data and adapting to new tasks. This means that it can be fine-tuned for specific applications, such as customer service or personal assistants.
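Since GPT-4-class models are conversational, applications typically pass the dialogue as a list of role-tagged messages in OpenAI's chat format. A minimal sketch of assembling that list follows; the helper name and prompts are illustrative, and the exact client call depends on your library version.

```python
def build_messages(system_prompt, history, user_input):
    """Assemble the role-tagged message list a chat-completion API expects.

    history is a list of (user_message, assistant_reply) pairs from
    earlier turns, so the model sees the conversation's context.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": user_input})
    return messages
```

With OpenAI's Python library, this list would be passed as the `messages` parameter of a chat-completion request against a GPT-4-class model; the system prompt is where application-specific behavior (a customer-service persona, say) gets specified.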

Prospects for Future Use

GPT-4 has the potential to revolutionize the way we interact with chatbots and virtual assistants. The model can be used in a wide range of applications, such as customer service, healthcare, and education.

In customer service, GPT-4 can provide personalized and engaging responses to customers, improving customer satisfaction and loyalty. In healthcare, the model can be used to generate patient reports and assist doctors in diagnosing diseases. In education, GPT-4 can be used to generate personalized learning materials and provide feedback to students.

Impact of GPT-4 on the Job Market

image3 As with any technological advancement, there are concerns about the impact that GPT-4 will have on the job market. In particular, some experts predict that the model may replace human workers in certain roles, such as customer service representatives.

GPT-4's ability to generate personalized and engaging responses could make it an attractive alternative to human customer service representatives. Companies may choose to implement the model as a cost-saving measure, reducing their reliance on human workers.

However, it is important to note that GPT-4 is not designed to replace human workers entirely. The model is still limited by its programming and training data and may not be able to handle complex or emotional situations as well as a human representative.

Furthermore, GPT-4's potential for improving customer satisfaction and loyalty means that it could create new job opportunities in other areas. For example, companies may need more skilled workers to manage and fine-tune the model for specific applications.

Overall, while there may be some disruption in the job market due to GPT-4's capabilities, it is unlikely that it will completely replace human workers in customer service and other similar roles. Instead, we can expect a shift towards more specialized and skilled roles in managing AI models like GPT-4.

Use Cases for GPT-4 in Industries Beyond Customer Service, Healthcare and Education

While the use cases for GPT-4 in customer service, healthcare and education are well established, the potential applications of this technology extend far beyond these areas. In fact, there are already a number of exciting use cases being explored in industries such as finance and marketing.

One area where GPT-4 could have a significant impact is finance. The model's ability to analyze large amounts of data and generate insights could be invaluable for financial institutions looking to improve their decision-making processes. For example, GPT-4 could be used to analyze market trends and predict future stock prices with greater accuracy than traditional methods.

In marketing, GPT-4 could be used to create more personalized and engaging content for consumers. The model's ability to generate natural language text that is tailored to individual preferences and interests could help companies stand out in a crowded marketplace. Additionally, GPT-4 could be used to analyze consumer data and provide insights into buying behavior, allowing companies to optimize their marketing strategies.

Overall, the potential applications of GPT-4 extend far beyond the industries mentioned in this document. As more organizations begin to explore the capabilities of this technology, we can expect to see new and innovative use cases emerge across a wide range of industries.

People's Reactions and Actual Usage

GPT-4 has generated a lot of excitement and interest in the AI community. Many experts believe that the model represents a significant breakthrough in natural language processing and AI.

However, there are also concerns about the potential misuse of the technology, such as the generation of fake news or the manipulation of public opinion.

Despite these concerns, GPT-4 has already been used in several applications, such as chatbots and virtual assistants. The model has been praised for its ability to generate engaging and natural conversations, and it is expected to become even more prevalent in the future.

In conclusion, GPT-4 represents a significant breakthrough in the field of AI chatbots. The model's ability to generate human-like text and adapt to new tasks has the potential to revolutionize the way we interact with chatbots and virtual assistants. While there are concerns about the potential misuse of the technology, the prospects for future use are promising.

The future development of language models beyond GPT-4

image4 While GPT-4 represents a significant leap forward in AI chatbot technology, research is already underway to develop even more advanced language models. One area of focus for researchers is improving the ability of language models to understand and generate contextually appropriate responses.

One approach that shows promise is the use of multimodal inputs, such as images and videos, to provide additional context for language models. By incorporating visual information into their understanding of language, these models could generate even more engaging and natural conversations.

Another area of focus is improving the ethical considerations surrounding the development and use of language models. As these models become more advanced and prevalent, it will be important to ensure that they are being used in ways that benefit society as a whole.

Finally, researchers are also exploring the potential for developing more specialized language models for specific industries or applications. For example, a language model designed specifically for medical applications could help doctors diagnose diseases or generate patient reports with greater accuracy than a general-purpose model like GPT-4.

Overall, while GPT-4 represents a major breakthrough in AI chatbot technology, there is still much room for further development and innovation in this field. As researchers continue to explore new approaches and techniques, we can expect to see even more advanced and sophisticated language models emerge in the years ahead.

Conclusion

All of the above articles were themselves written with a GPT language model. We have arrived at an era in which computers create the things that were once created only by humans: music, text, video, images, and code. The era of crafting works purely by hand is nearly over; that role is passing to computers, and the focus now shifts to exactly how we put them to use.

Automating Tasks with ChatGPT and Python

· 2 min read
Alex Han
Software Engineer

Background

After reading an article in which Elon Musk said ChatGPT was scary, I gave it a quick try; these are my notes.

ChatGPT Automation

What is ChatGPT?

ChatGPT is a prototype conversational AI chatbot developed by OpenAI. It is built on the GPT-3.5 language model and fine-tuned using both supervised learning and reinforcement learning. (That is the state as of this writing; it will likely keep improving.)

Usage Examples

matplotlib

You can use it right away by clicking Try ChatGPT on the chatGPT site (sign up beforehand, since you have to log in). After logging in, a chat window appears; I typed plot a linear regression with Python using matplotlib into it.

chatgpt_matplotlib

import matplotlib.pyplot as plt

# create some fake data
x = [1, 2, 3, 4, 5]
y = [2, 3, 4, 5, 6]

# fit a linear regression model
slope, intercept = np.polyfit(x, y, 1)

# predict the y-values of a line with the fitted model
predictions = [slope * i + intercept for i in x]

# plot the data points and the fitted line
plt.scatter(x, y)
plt.plot(x, predictions)

plt.show()

It generates development-ready code like the above.

If you test whether it actually runs, np shows up as undefined, but anyone who has used matplotlib, pandas, or numpy will immediately recognize np as the usual abbreviation for numpy. Import numpy and run it again, and it works correctly.

matplotlib_codetest
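For reference, here is the same snippet with the missing numpy import added, which is the only fix needed. A non-interactive matplotlib backend and savefig are used here so it also runs headlessly; in a normal session you would keep `plt.show()` as generated.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in an interactive session
import matplotlib.pyplot as plt

# create some fake data
x = [1, 2, 3, 4, 5]
y = [2, 3, 4, 5, 6]

# fit a linear regression model (here y = x + 1, so slope 1, intercept 1)
slope, intercept = np.polyfit(x, y, 1)

# predict the y-values of a line with the fitted model
predictions = [slope * i + intercept for i in x]

# plot the data points and the fitted line
plt.scatter(x, y)
plt.plot(x, predictions)
plt.savefig("regression.png")  # plt.show() in an interactive session
```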

send message

Next, I tried sending a WhatsApp message by typing send a message on Whatsapp using Python and pywhatkit.

send_whatsapp

Where you once had to search Google, hunt through Stack Overflow, or dig through official documentation, a single line of typing now shows you how to use a library.

scraping

Next, for web scraping, I typed web scrape https://books.toscrape.com/ using Python and beautiful soup. (This command works well, though it may not for other websites; the usage itself was implemented correctly.)

webscrape_gpt

Running it in actual code,

webscrape

it works fine.

Conclusion

Since it's freely available, anyone bored or tired of Googling might enjoy trying it as a new kind of search engine. (Personally, I find it better than Microsoft's Copilot.) It was a valuable chance to experience how far natural-language-processing models have come.

How to Train and Optimize a Neural Network

· 5 min read
Alex Han
Software Engineer

Background

Being asked whether I can do deep learning always feels like pressure. Deep learning sounds intimidating from the name alone, so let's walk through a simple way to use it with the pytorch library.

Deep Learning

Deep learning is a subset of machine learning that works with large amounts of data using algorithms inspired, in particular, by the structure and function of the human brain. That is why deep learning models are often called deep neural networks.

Dataset

To skip the complex and extensive data preprocessing steps, we use the small iris dataset.

Load Data with Data Loader

import warnings
warnings.filterwarnings("ignore")

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split

import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from IPython import display
display.set_matplotlib_formats("svg")

iris = pd.read_csv("https://gist.githubusercontent.com/netj/8836201/raw/6f9306ad21398ea43cba4f7d537619d0e07d5ae3/iris.csv")
iris.head()

Use the head() method to take a quick look at the dataset's columns and actual values.

iris_head

The target we want to predict with deep learning is the variety column; think of its value as determined by the other four columns. In formula terms, let the four columns be X and the target prediction be y.

X = torch.tensor(iris.drop("variety", axis=1).values, dtype=torch.float)
y = torch.tensor(
    [0 if vty == "Setosa" else 1 if vty == "Versicolor" else 2 for vty in iris["variety"]],
    dtype=torch.long
)

print(X.shape, y.shape)

As in the code above, X is the loaded data with the variety column dropped, and y is the variety column alone. The variety values are mapped to 0 for Setosa, 1 for Versicolor, and 2 for everything else.

Train / Test split

Now split the data into a training set and a test set for validation, using the train_test_split function imported earlier via from sklearn.model_selection import train_test_split.

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=42)

train_data = TensorDataset(X_train, y_train)
test_data = TensorDataset(X_test, y_test)

train_loader = DataLoader(train_data, shuffle=True, batch_size=12)
test_loader = DataLoader(test_data, batch_size=len(test_data.tensors[0]))

print("Training data batches:")
for X, y in train_loader:
    print(X.shape, y.shape)

print("\nTest data batches:")
for X, y in test_loader:
    print(X.shape, y.shape)

Training

The model we build for this task uses a single hidden layer, with 16 nodes connecting the input layer to the output layer.

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.input = nn.Linear(in_features=4, out_features=16)
        self.hidden_1 = nn.Linear(in_features=16, out_features=16)
        self.output = nn.Linear(in_features=16, out_features=3)

    def forward(self, x):
        x = F.relu(self.input(x))
        x = F.relu(self.hidden_1(x))
        return self.output(x)


model = Net()
print(model)

With the model in place, training runs in a loop that gradually reduces the error: CrossEntropyLoss tracks the loss, and Adam performs the gradient-descent steps.

num_epochs = 200
train_accuracies, test_accuracies = [], []

loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=0.01)

for epoch in range(num_epochs):
    # Train set
    for X, y in train_loader:
        preds = model(X)
        pred_labels = torch.argmax(preds, axis=1)
        loss = loss_function(preds, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_accuracies.append(
            100 * torch.mean((pred_labels == y).float()).item()
        )

    # Test set
    X, y = next(iter(test_loader))
    pred_labels = torch.argmax(model(X), axis=1)
    test_accuracies.append(
        100 * torch.mean((pred_labels == y).float()).item()
    )

Since the data is small, this finishes quickly. Visualizing how accuracy changed per epoch gives the following.

fig = plt.figure(tight_layout=True)
gs = gridspec.GridSpec(nrows=2, ncols=1)

ax = fig.add_subplot(gs[0, 0])
ax.plot(train_accuracies)
ax.set_xlabel("Epoch")
ax.set_ylabel("Training accuracy")

ax = fig.add_subplot(gs[1, 0])
ax.plot(test_accuracies)
ax.set_xlabel("Epoch")
ax.set_ylabel("Test accuracy")

fig.align_labels()
plt.show()

visualize

Optimization

  1. First, simplify the training process as follows.
losses = []
def train_model(train_loader, test_loader, model, lr=0.01, num_epochs=200):
    train_accuracies, test_accuracies = [], []
    loss_function = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(params=model.parameters(), lr=lr)

    for epoch in range(num_epochs):
        for X, y in train_loader:
            preds = model(X)
            pred_labels = torch.argmax(preds, axis=1)
            loss = loss_function(preds, y)
            losses.append(loss.detach().numpy())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        train_accuracies.append(
            100 * torch.mean((pred_labels == y).float()).item()
        )

        X, y = next(iter(test_loader))
        pred_labels = torch.argmax(model(X), axis=1)
        test_accuracies.append(
            100 * torch.mean((pred_labels == y).float()).item()
        )

    return train_accuracies[-1], test_accuracies[-1]


train_model(train_loader, test_loader, Net())
  2. Create a new model class that lets us adjust the number of layers.
class Net2(nn.Module):
    def __init__(self, n_units, n_layers):
        super().__init__()
        self.n_layers = n_layers

        self.layers = nn.ModuleDict()
        self.layers["input"] = nn.Linear(in_features=4, out_features=n_units)

        for i in range(self.n_layers):
            self.layers[f"hidden_{i}"] = nn.Linear(in_features=n_units, out_features=n_units)

        self.layers["output"] = nn.Linear(in_features=n_units, out_features=3)

    def forward(self, x):
        x = self.layers["input"](x)

        for i in range(self.n_layers):
            x = F.relu(self.layers[f"hidden_{i}"](x))

        return self.layers["output"](x)
  3. Increase the number of layers from 1 to 4, give each layer 8, 16, 24, 32, 40, 48, 56, or 64 nodes, and watch how the results change.
n_layers = np.arange(1, 5)
n_units = np.arange(8, 65, 8)
train_accuracies, test_accuracies = [], []

for i in range(len(n_units)):
    for j in range(len(n_layers)):
        model = Net2(n_units=n_units[i], n_layers=n_layers[j])
        train_acc, test_acc = train_model(train_loader, test_loader, model)
        train_accuracies.append({
            "n_layers": n_layers[j],
            "n_units": n_units[i],
            "accuracy": train_acc
        })
        test_accuracies.append({
            "n_layers": n_layers[j],
            "n_units": n_units[i],
            "accuracy": test_acc
        })


train_accuracies = pd.DataFrame(train_accuracies).sort_values(by=["n_layers", "n_units"]).reset_index(drop=True)
test_accuracies = pd.DataFrame(test_accuracies).sort_values(by=["n_layers", "n_units"]).reset_index(drop=True)
test_accuracies.head()

accuracy

Looking at the results, you can see how test accuracy behaved for each number of layers and nodes, and test_accuracies[test_accuracies["accuracy"] == test_accuracies["accuracy"].max()] picks out the layer and node counts with the highest accuracy.

Conclusion

The iris dataset is tiny and trains extremely well, so nearly every configuration scored high accuracy, which can make this whole process look pointless.

In real work, however, data is rarely this clean, and even when it is, the trends are often inconsistent and the underlying relationship hard to pin down; that is exactly when steps like these are needed. Beyond adjusting the number of layers and nodes, optimizing by varying other parameters of the model or changing the training procedure should yield even more meaningful results.

I hope writing this post helps build a little more confidence with deep learning; mine included...

Tech Lead in a Start-up (Reflections on the Tech Lead Role)

· 7 min read
Alex Han
Software Engineer

Background

I haven't worked at many companies, but because I've always developed at small start-ups, I've both led technically myself and watched others lead up close, so I'd like to write down my thoughts on the role.

Tech Lead

What is a tech lead?

Development leader, technical leader, lead programmer, CTO: the title differs from company to company, but the meaning is the same. A tech lead is the developers' manager and the person responsible for the company's technology.

How far does the role extend?

Thinking back on my time at small start-ups, the tech lead that both management and developers initially wanted was someone who understands every technology used in-house and can implement it directly, acts as a fixer whenever development gets hard and gives advice, takes over the awkward communication between management and developers, states technical difficulty accurately, and coordinates schedules. And because this person is experienced, everyone expects them to handle all of it like a pro within a month of joining.

That expectation is largely an illusion, and partly a desire to offload one's own work.

The tech lead who emerges naturally at a start-up!

At a small start-up with few developers, each developer has to play several roles.

For example, a backend developer handles not just the backend server but cloud and infrastructure work, database administration, data engineering, analytics, crawlers, CI/CD, documentation, schedule management, B2B meetings, and design/planning discussions. If they're the senior developer and there are a few more developers under them, they naturally divide the work with the juniors and end up doing exactly what a backend tech lead does.

On top of that, when staffing is even thinner, they may also have to handle frontend and app development in parallel.

As time passes and structure forms among the developers, the tech lead defines many things, such as GitHub issue management, code review, testing processes, development conventions, and development/deployment procedures; documentation and development move in lockstep, and what each developer should do becomes clear.

At that point, the developers being led can focus on their own work and easily find and reuse what others have documented and filed as issues and tasks: an efficient development organization.

Naturally, the tech lead starts discussing upcoming business with management, doing technical research, estimating and managing schedules in consultation with the developers; the company hires more experienced senior developers, and the tech lead keeps communicating and building a better culture.

The tech lead grafted unnaturally onto a start-up?

**"The tech lead who emerges naturally at a start-up"** can begin with the existing developers already rallied and with accumulated service-operation know-how and development experience. (Though it becomes a problem if the development culture can't welcome and accommodate the people who join later...)

Early start-ups usually don't have much money, so they hire junior developers, and not many of them, along with non-developers who want the product built: planners, designers, marketers.

The junior developers grow quickly while building the service and getting it launched, but naturally their pace was slow, and their weak spots showed.

Still, around the time they've pulled off a successful launch, kept the service running, studied other technologies, and gradually established processes that fit the company, the small team starts to feel overstretched and unsure whether they're doing well, so they ask for a tech lead hire; and management, wanting to help the juniors, or not trusting them, starts pinning its hopes on the near-illusory tech lead described in "How far does the role extend?"

The tech lead who arrives carrying all those expectations needs time to study what has already been built, but is pressed to adapt fast. (The less documentation there is and the worse the state of the code, the longer this takes.) Meanwhile, the developers who had naturally grown into the lead role keep doing their own work while continuously educating the incoming lead, so their load only grows.

Even a tech lead with long experience must be given time to understand the service's structure, development status, and technologies, but a start-up whose service has to change quickly can't afford to wait. In a situation where they should quietly get oriented and offer only technical advice, they're loaded down with schedule coordination, issue tracking, and much more; the half-oriented state persists, and decisions, coordination, and negotiations keep going wrong.

Conversely, if you hired a highly experienced senior developer as tech lead, you would have had to guarantee a high salary, so having them spend their time just getting oriented is also a big loss for the company.

I think it's better for the naturally-emerged lead described above to coordinate with experienced senior developers, communicate well with the existing team, and manage the work. Or hire the person as a senior developer first, work together for a while, and later decide with the surrounding developers based on their willingness and ability; that may be the better approach.

My experience with tech leads

At my first IT company there were three developers: an app developer, a backend developer, and me as the frontend developer; the backend developer was the closest thing that company had to a tech lead.

He had more than ten years of experience but little with react; he adopted it anyway because it was the latest technology, without any research, so he didn't really know how to use it. Before I joined, just the two of them had been developing, so he had never had to share knowledge with anyone: no documents, no sharing, no training.

Looking back, that company taught me the hard way that as a developer you shouldn't depend on anyone but study and solve things yourself, so in a way he did lead me somewhere in my life as a developer.

At my second IT company, everyone joined on the same date, all new hires, working on a mission to build the company's first service; four of us started developing: a big-data graduate student, a data engineer, a designer, and a web developer.

Of the four, I had done some server development and had react experience, which meant I might also be able to build the app with react-native, so I naturally took on the tech lead role. For the first time I distributed work across the team, estimated schedules with task difficulty in mind, coordinated them, and found real enjoyment in working through problems together whenever they arose.

The hardest part was the self-doubt that came from lack of experience. We held the same rank and my teammates followed my lead well, but cloud, react-native, react initial setup, backend initial setup: all of it was new to me, so everything took a long time and I had no slack. And because we had deadlines to meet, when delegated work went badly I just did it myself instead of guiding or teaching my colleagues.

Looking back, that wasn't good for my colleagues' growth, for mine, or for the company. But as a junior with no slack who had been handed the lead role, I'll count it as having done my best...

At the next company, while I was still being onboarded, all the developers quit within a month over conflicting interests and office politics, and I was left alone. For six months I stayed on by myself, mapping out the state of development while maintaining, operating, and upgrading the service. During that time I wrote up the documents needed for operations and development, from the account credentials the service relies on to the current development state and the deployment setup, and six months later I handed all of it to the newly hired CTO.

But that CTO didn't want to study the handover; he wanted to rebuild everything on the stack he already knew, and pushed ahead without researching the database migration or the choice of development stack. His way of leading was to tell no one what he was going to do, suddenly announce he had been studying some new technology, apply it to part of the production service, and expect the team to finish rolling it out across the whole service.

Because he adopted new technology as if practicing on the live service, with no grasp of the existing one, there was no ERD and no data migration: three years of accumulated data were lost, the assignment feature tied directly to the business broke, major problems kept occurring, and since he never shared the overall picture, all anyone could do was wait for him to fix them.

That CTO was eventually pushed out, and it turned out the nearly ten years of experience he had claimed was really less than five, most of it frontend at that. Management, having essentially been conned, said the next CTO would be chosen carefully, yet abruptly brought in someone who had been a DBA in the game industry as CTO and wanted everything handed over so the company could revolve around him.

But the game industry's development stack was completely different from web and app development, and as a DBA he had only ever worked with databases; before long he went back to the game industry.

As time passed, many people joined and things eased up, and I naturally took on tech lead duties alongside development: analyzing development tasks, coordinating with other teams, distributing work within the team, adjusting schedules, and general managing. I also shared research on new technologies and comparisons of development stacks with colleagues so we could choose stacks on solid grounds, and by communicating with the team we built a progressive development culture: GitHub policies, code conventions, and more.

Building on these experiences, I now take on tech lead work as a freelancer as well, trying to help teams settle into better development policies and culture. It motivates me to keep growing so that I can see the big picture, manage, and lead technically with more composure.

It ended on a self-congratulatory note, but the more I lead, the more I feel how much I still lack; I'll keep studying, find what the team really needs, apply it in the right place at the right time, and work to become a good leader who communicates better with everyone.