
Posts

Showing posts with the label database

Data Migration Strategies for MySQL Databases

Data Migration Strategies for MySQL Databases: An Overview of a Few Approaches

Introduction: Data migration is a crucial aspect of database management, especially when transitioning from one MySQL database to another. It involves transferring data while ensuring its integrity, consistency, and availability. This article explores the challenges associated with data migration and provides detailed solutions along with sample code snippets.

Challenges in Data Migration:

Data Consistency: Ensuring that data remains consistent during the migration process is critical. Any discrepancies or errors can lead to incorrect results and data loss.

Downtime: Minimizing downtime is essential to maintain business continuity. Prolonged downtime can disrupt operations and affect user experience.

Schema Changes: When migrating to a new database version, the schema might change. Adapting the data to fit the new schema while preserving relationships can be complex.

Data Volume: Handling large…
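The post's sample snippets are cut off in this excerpt; as a hedged sketch of the simplest approach (the source_db, target_db, and customers names are placeholders, not from the post), a table can be copied to the new database and then verified for consistency:

    -- Illustrative only: source_db, target_db, and customers are placeholder names.
    CREATE TABLE target_db.customers LIKE source_db.customers;
    INSERT INTO target_db.customers SELECT * FROM source_db.customers;

    -- Verify consistency: the checksums and row counts should match.
    CHECKSUM TABLE source_db.customers, target_db.customers;
    SELECT COUNT(*) FROM source_db.customers;
    SELECT COUNT(*) FROM target_db.customers;

Note that this naive copy addresses consistency but not downtime: writes to the source table must be paused while it runs.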

Riak Database - A Distributed NoSQL Solution

Exploring Riak Database: A Distributed NoSQL Solution

Introduction: In the world of modern technology, data storage and retrieval have become crucial aspects of software development. Traditional relational databases often struggle to handle the scale and distribution demands of today's applications. This is where Riak, a distributed NoSQL database, comes into play. In this article, we will delve into the problem that the Riak database solves and explore how it addresses these challenges with real-world examples.

The Problem: Scalability and Fault Tolerance: Traditional relational databases are built around a fixed schema and a single server. As data grows, these databases might struggle to handle the load efficiently, leading to performance bottlenecks and scaling challenges. Moreover, relying on a single server is risky: if that server fails, data might be lost or unavailable.

Riak's Solution…

How to make table schema changes and restore data after appropriate transformation

Introduction: In many cases we might encounter scenarios where we need to back up data or perform a schema change, such as a data type transformation. This blog post illustrates the steps that can be used as a checklist to perform the operation with ease.

Steps:

The SQL query below creates a new table with the same schema as the table named in the LIKE clause:

create table <table_name>_old like <table_name>;

Now that we have the schema ready, we can copy the data for backup using the command below:

insert into <table_name>_old select * from <table_name>;

Once the above command succeeds, we can drop the old table:

drop table <table_name>;

Then we can create the new table with the same name as the one given in <table_name>, so that applications can continue to use the same table name:

CREATE TABLE `<table_name>` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  ...);

Now that the table with the new schema is ready, we can migrate the data…
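Putting the checklist together, here is a minimal end-to-end sketch for a hypothetical orders table whose order_total column changes from VARCHAR to DECIMAL (the table, column, and type change are illustrative assumptions, not from the post):

    -- Hypothetical example: orders.order_total changes from VARCHAR to DECIMAL.
    CREATE TABLE orders_old LIKE orders;          -- 1. backup table with the same schema
    INSERT INTO orders_old SELECT * FROM orders;  -- 2. copy the data for backup
    DROP TABLE orders;                            -- 3. drop the original table

    CREATE TABLE orders (                         -- 4. recreate with the new schema
      id bigint NOT NULL AUTO_INCREMENT,
      order_total DECIMAL(10,2) NOT NULL,
      PRIMARY KEY (id)
    );

    -- 5. migrate the data back, transforming the type along the way
    INSERT INTO orders (id, order_total)
    SELECT id, CAST(order_total AS DECIMAL(10,2)) FROM orders_old;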

Using the mysqldump command to back up data from a MySQL database table

Introduction: In this blog post, we will take a look at the mysqldump command that ships as part of the MySQL client tools.

Data Backup: We normally back up data by exporting it as JSON / CSV or as SQL files. This is required so that if a change we are working on has an unexpected impact, we can immediately use the backup to undo the unintended database operation. When working with JSON / CSV, we sometimes have to deal with issues in the data formats. With SQL files, on the other hand, we get the option to rewrite the queries, insert them into a local database, perform the required transformations or operations, and then move the data to the server. MySQL Workbench provides an export option through which we can generate SQL scripts, but it has issues: the target table names are left empty, and a new insert statement is generated for each row in the table.

Example Command:

mysqldump.exe --host=servername --port=3306 --default-character-set=utf8 --user=(db_user_name) -p --protocol=tcp --colu…
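The excerpt cuts the command off mid-flag; a complete invocation could look like the hedged sketch below, where the host, credentials, database, and table names are placeholders and the exact flag set should be checked against your mysqldump version:

    mysqldump.exe --host=servername --port=3306 --protocol=tcp ^
      --user=db_user_name -p --default-character-set=utf8 ^
      --single-transaction --complete-insert ^
      --result-file=backup.sql my_database my_table

Here --single-transaction takes a consistent snapshot of InnoDB tables without locking, and --complete-insert writes the column names into every INSERT statement, which makes the generated script easier to edit and replay.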

Strategy for Database choice in a multi-tenant application

Introduction: Which database strategy should I choose, and what are the options on the table? This post aims to help the developer or architect choose a database strategy for a multi-tenant application.

In a multi-tenant application, there are many areas of concern that require attention, such as cost, data isolation and protection, maintenance, and disaster recovery. Multi-tenant solutions can adopt any strategy from one database per tenant (fully isolated), to one row per tenant (shared database), to a separate schema per tenant in the same database (shared server, separate schemas).
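As a hedged illustration of the shared-database (one row per tenant) option, with table and column names invented for this sketch:

    -- Shared database, shared table: each row is tagged with its owning tenant.
    CREATE TABLE invoices (
      id        BIGINT NOT NULL AUTO_INCREMENT,
      tenant_id VARCHAR(36) NOT NULL,
      amount    DECIMAL(10,2) NOT NULL,
      PRIMARY KEY (id),
      KEY idx_tenant (tenant_id)
    );

    -- Every application query must then filter by tenant.
    SELECT id, amount FROM invoices WHERE tenant_id = 'tenant-a';

The trade-off is the usual one: this is the cheapest option to operate, but it pushes the isolation burden onto every query the application issues.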

Data Security through Physical Isolation in Multi-Tenant SaaS Applications

There exists a notion that in order to achieve better security, we should opt for physical isolation of databases. In reality this is not true: with the right patterns we can keep a single table holding all the tenants' information and still use the row-level security pattern to restrict data access by tenant. However, this should be treated as an MVP design; the design should be revisited based on various parameters, and if a tenant accumulates a very large volume of data, that tenant's data can be migrated to a separate database. Such a migration can take considerable time, so a suitable connector should be built that applies the tenant filter to the appropriate value during the data migration. The single-database approach is not applicable when GDPR, HIPAA, or other compliance requirements must be enforced.
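A hedged sketch of what such a tenant-filtered migration could look like in plain SQL, with database, table, and tenant names invented for this example:

    -- Move one large tenant out of the shared database into its own.
    CREATE TABLE tenant_db.invoices LIKE shared_db.invoices;

    INSERT INTO tenant_db.invoices
    SELECT * FROM shared_db.invoices
    WHERE tenant_id = 'big-tenant-42';  -- the tenant filter applied during migration

    -- After verifying the copy, remove the migrated rows from the shared table.
    DELETE FROM shared_db.invoices WHERE tenant_id = 'big-tenant-42';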

Implementing Row Level Security [RLS] for a Multi-Tenant SaaS Application

Row Level Security: The need for row-level security stems from the demand for fine-grained security of data, as applications generate vast amounts of data by the day. Application developers need to make sure that data is accessible to the right audience based on the right access-level settings. Even today, whenever an application is built, the application development team spends a lot of time researching the approach, implementing multiple tables and duplicating filter logic across queries to manage data security for every query that travels from the end-user request to the application database. This approach requires a lot of thought, testing, and security review, because the queries need to be intercepted and updated, and the data retrieval needs to be validated to make sure end users see only the data they are entitled to.

Implementation: With the advent of the row-level security feature being rolled out in main databases…
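As a taste of the built-in feature, here is a minimal sketch using PostgreSQL's row-level security; the table, policy, and session-setting names are illustrative, and other engines expose the feature differently:

    -- Illustrative PostgreSQL example: tenant isolation enforced by the database.
    CREATE TABLE orders (
      id        bigserial PRIMARY KEY,
      tenant_id text NOT NULL,
      amount    numeric NOT NULL
    );

    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

    -- Rows are visible only when tenant_id matches the session's tenant,
    -- set per connection with: SET app.current_tenant = 'tenant-a';
    CREATE POLICY tenant_isolation ON orders
      USING (tenant_id = current_setting('app.current_tenant'));

With the policy in place, ordinary application queries need no extra WHERE clauses; the database applies the filter itself (note that table owners and superusers bypass policies by default).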

Restoring a Database using .bak files and SQL Scripts alone

The following is the script to restore any database from a .bak file without using SQL Server Management Studio:

-- The database name in the restored bak file should be the same as the one given here
RESTORE DATABASE [authserver]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.LOCAL\MSSQL\Backup\database.bak'
WITH FILE = 1,
MOVE N'authserver' TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.LOCAL\MSSQL\DATA\database.MDF',
MOVE N'authserver_LOG' TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.LOCAL\MSSQL\DATA\database.LDF',
NOUNLOAD, REPLACE, STATS = 10
GO
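If the logical file names (authserver and authserver_LOG above) are not known in advance, they can usually be read from the backup itself; a small sketch, assuming the same backup path:

-- Lists the logical names and file types inside the backup,
-- which feed the MOVE clauses of the RESTORE DATABASE command.
RESTORE FILELISTONLY
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.LOCAL\MSSQL\Backup\database.bak'
GO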