CSQL as Multiple Bi-directional Cache Nodes for a Single Data Source

CSQL Cache is an open-source, high-performance, bi-directional, updateable data-caching infrastructure for any disk-resident database. It sits between the application process and the back-end database to provide unprecedented throughput to your application.

[Figure 1: CSQL cache nodes deployed between the clustered application tier and the target database]

CSQL Cache accelerates application performance at the data tier by sitting in the clustered middle-tier servers. In a multiple bi-directional cache setup, the most frequently used tables are cached in CSQL, which is connected to the clustered application, by the table-loader module. Any change made in the application layer is reflected directly in all other CSQL cache nodes as well as in the target database. For DML operations on non-cached tables, CSQL provides a gateway that accesses those tables directly in the target database; the application has no knowledge of whether its operation is handled by CSQL or by the target database.

Any change made in the target database to a cached table is propagated to all CSQL cache nodes, so that an application connected to any CSQL node sees consistent data. To achieve this, CSQL maintains a log table in the target database that keeps track of all operations on cached tables, as well as the number of cache nodes currently running. Triggers are installed in the target database for all DML operations on cached tables, and they generate entries in this log table.
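
For illustration (an assumed example: two cache nodes with CACHE_ID 1 and 2, and the operation codes used by the triggers shown later, where 1 marks an inserted row image and 2 a deleted row image), inserting a row with primary key 10 into a cached table p1 at the target would leave log entries like these:

SELECT tablename, pkid, operation, cacheid FROM csql_log_int;
tablename | pkid | operation | cacheid
p1        |   10 |         1 |       1
p1        |   10 |         1 |       2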

Bi-Directional Cache Settings for MySQL as the Target Database

Before starting the bi-directional setup, make sure that MySQL (the target database), unixODBC, and MySQL Connector/ODBC are installed. If not, install them first.

Let us consider two CSQL cache nodes and MySQL as the target database. To set up multiple bi-directional caching, first create the table in MySQL that will hold the log records, using the SQL statement below with the mysql or isql tool.

CREATE TABLE csql_log_int (tablename CHAR(64), pkid INT, operation INT, cacheid INT, id INT NOT NULL UNIQUE AUTO_INCREMENT) ENGINE=InnoDB;

In the csql.conf file, set CACHE_ID = 1 on one cache node and CACHE_ID = 2 on the other, and make sure that CACHE_TABLE and ENABLE_BIDIRECTIONAL_CACHE are set to true. Also, DSN should be set to myodbc3 for MySQL.
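
A minimal sketch of the relevant csql.conf entries for cache node 1 (only the parameters discussed here; the exact file layout depends on your CSQL release):

CACHE_TABLE=true
ENABLE_BIDIRECTIONAL_CACHE=true
CACHE_ID=1
DSN=myodbc3

On the second node the only difference is CACHE_ID=2.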

For a cached table p1 with primary key f1, write the triggers (trigger.sql) as below. Note that each trigger body is terminated with the | delimiter:

use test;
drop trigger if exists triggerinsertp1;
drop trigger if exists triggerupdatep1;
drop trigger if exists triggerdeletep1;

DELIMITER |
create trigger triggerinsertp1
AFTER INSERT on p1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', NEW.f1, 1, 2);
END;
|
create trigger triggerupdatep1
AFTER UPDATE on p1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', OLD.f1, 2, 2);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', NEW.f1, 1, 2);
END;
|
create trigger triggerdeletep1
AFTER DELETE on p1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('p1', OLD.f1, 2, 2);
END;
|
DELIMITER ;

For a different table, replace p1 with the table name and f1 with the primary-key field name.

After writing trigger.sql as per your requirement, execute it as below:

$ mysql -u root -p <trigger.sql

Now start the CSQL server with csqlserver -c and execute csql -g on both cache nodes. Verify multiple bi-directional caching by performing DML operations on the target database and on the CSQL cache nodes.
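
As a quick sanity check (a sketch, assuming p1 is cached on both nodes and has a single integer field f1), insert a row at the MySQL target and read it back from a cache node:

$ mysql -u root -p -e "INSERT INTO test.p1 VALUES (10);"
$ csql
CSQL> select * from p1;

The row should also be visible from the second cache node, and a row inserted through either CSQL node should likewise show up in MySQL and on the other node.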

Product Page

http://www.csqldb.com

http://www.csqlcache.com



User Process Crash Dump File Creation

Sometimes a user process stops unexpectedly, and it is difficult to determine the root cause by analyzing the application log alone. In such cases the process may have crashed due to an invalid read/write access violation, also known as a segmentation fault.

It is often hard to understand why a segmentation fault occurred; to analyze it, you may need to debug the dump (core) file. By default, the system does not create a dump file when a crash occurs, so the system must be configured to create one whenever a user process crashes. Here are the steps to configure this.

LINUX

On a Linux system, run the command below with root privileges:

$ ulimit -c unlimited

By default, the core file is named core for every process. To give the core file a more useful name that includes the process name, PID, etc., modify /proc/sys/kernel/core_pattern. For details, refer to the man core page.
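
For example, to name core files with the executable name and PID, and then open one in gdb (the program name myapp and PID 1234 below are hypothetical):

$ echo "core.%e.%p" > /proc/sys/kernel/core_pattern
$ gdb ./myapp core.myapp.1234
(gdb) bt

The bt command prints the stack trace at the point of the crash, which usually identifies the faulting function.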

WINDOWS

To enable dump file creation whenever a user process crashes, create the registry key below using administrator privileges.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps 

Follow the steps below to create the above registry entry:

  • Create a file named "dump.reg".
  • Open dump.reg with Notepad.
  • Add the content below and save it.

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
"DumpFolder"=hex(2):43,00,3a,00,5c,00,43,00,72,00,61,00,73,00,68,00,44,00,75,00,6d,00,70,00,73,00,00,00
"DumpCount"=dword:00000010
"DumpType"=dword:00000002
"CustomDumpFlags"=dword:00000000

  • Execute dump.reg. It creates the LocalDumps registry key, which can be confirmed with regedit.exe.
  • Create the folder C:\CrashDumps (matching the DumpFolder value above).

After this configuration, whenever a user process crashes, its dump file is created under C:\CrashDumps, where it can be analyzed to find the cause of the crash.

FIPS with OpenSSL for Linux

The 140 series of Federal Information Processing Standards (FIPS) are security standards that specify requirements for cryptography modules. They are developed by two government bodies: the National Institute of Standards and Technology in the United States and the Communications Security Establishment in Canada.

OpenSSL generally ships with FIPS support, but that build is not certified. A certified FIPS build is available with each FIPS release, which comes roughly once a year.

To build the FIPS module from the OpenSSL source, use the following compilation options:

$./config fips fipscanisterbuild
$ make
$ make test
$ make install

After the above commands finish, all libraries and executables are available in the default location /usr/local/ssl/fips.
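
As a rough sanity check (the file names below are the usual artifacts of the OpenSSL FIPS module build, but exact locations can vary by version), confirm that the FIPS canister and its supporting files were installed:

$ ls /usr/local/ssl/fips/lib
fipscanister.o  fipscanister.o.sha1  fips_premain.c  fips_premain.c.sha1

Applications are then linked against this canister so that the module's self-tests and integrity checks run at startup.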

For more information, visit www.openssl.org.

CSQL as Multiple Bi-directional Cache Nodes for MySQL

CSQL Cache is a generic database-caching platform that caches frequently accessed tables from your existing open-source or commercial database management system (Oracle, MySQL, Postgres, etc.) close to the application tier. It uses CSQL MMDB, a main-memory database designed for high-performance, high-volume data computing, to cache the tables, enabling real-time applications to deliver fast, predictable response times with high throughput.

Supported Feature Summary

* Full table

* Partial table

* Read only table

* Bi-directional table updates

* Transparent Caching

* Synchronous, Asynchronous Update Propagation Modes

* High Availability

* And More

For multiple bi-directional CSQL cache nodes with MySQL as the single data source, be sure that the MySQL database is installed and currently running. First create the log table in MySQL to hold the log records, using the SQL statement given below with the mysql or isql tool.

CREATE TABLE csql_log_int (tablename CHAR(64), pkid INT, operation INT, cacheid INT, id INT NOT NULL UNIQUE AUTO_INCREMENT) ENGINE=InnoDB;

Let us consider two CSQL cache nodes. In csql.conf, set CACHE_ID = 1 on one cache node and CACHE_ID = 2 on the other, and make sure that CACHE_TABLE and ENABLE_BIDIRECTIONAL_CACHE are set to true. If both nodes run on the same machine, give them different SYS_DB_KEY and USER_DB_KEY values. Again, DSN should be set to myodbc3 for MySQL. Set up the /etc/odbcinst.ini and /etc/odbc.ini files properly so that the isql tool works.
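
A minimal sketch of the two ODBC files for a DSN named myodbc3 (the driver path and database name are placeholders; adjust them to your MySQL Connector/ODBC installation):

/etc/odbcinst.ini:
[myodbc3]
Description = MySQL ODBC driver
Driver = /usr/lib/libmyodbc3.so

/etc/odbc.ini:
[myodbc3]
Driver = myodbc3
Server = localhost
Database = test
Port = 3306

With these in place, isql myodbc3 should connect to the MySQL target.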

Let a table t1, with an integer primary key f1 and a char field f2, be the table to cache. Create the triggers (say, trigger.sql) in the following format for the MySQL database:

drop trigger if exists triggerinsertt1;
drop trigger if exists triggerupdatet1;
drop trigger if exists triggerdeletet1;
DELIMITER |
create trigger triggerinsertt1
AFTER INSERT on t1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', NEW.f1, 1, 2);
END;
|
create trigger triggerupdatet1
AFTER UPDATE on t1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', OLD.f1, 2, 2);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', NEW.f1, 1, 2);
END;
|
create trigger triggerdeletet1
AFTER DELETE on t1
FOR EACH ROW
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t1', OLD.f1, 2, 2);
END;
|
DELIMITER ;

Note that for each operation, the above triggers insert two log records into the log table: one for cache node 1 and another for cache node 2. After executing the command below, the triggers are installed on the t1 table.

$ mysql -u root -p < trigger.sql

Run the two cache node servers, identified by cache IDs 1 and 2, and cache the t1 table on both nodes using the following tool:

$ cachetable -u root -p manager -t t1
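
To see the bi-directional flow end to end (a sketch; t1 is assumed to have the integer primary key f1 and char field f2 described above), insert through one cache node and observe the change everywhere. On cache node 1:

$ csql
CSQL> insert into t1 values (1, 'one');

Then at the MySQL target and on cache node 2:

$ mysql -u root -p -e "SELECT * FROM test.t1;"
$ csql
CSQL> select * from t1;

Both should return the new row; an update made directly in MySQL should likewise appear on both cache nodes via the trigger log.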

CSQL: NetBeans Configuration for CSQL JDBC DRIVER

The CSQL JDBC driver supports connections through the DriverManager, DataSource, and connection-pooling DataSource objects. The NetBeans IDE needs to be configured properly for the CSQL database.

Connection Through DriverManager

For a connection through the DriverManager object, start the CSQL server in one terminal. In another terminal, run . ./setupenv.ksh in the CSQL root directory, then go to the NetBeans bin directory and run ./netbeans. Create a Java application with the following code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Main {
    public static void main(String[] args) {
        try {
            // Load the CSQL JDBC driver and connect.
            Class.forName("csql.jdbc.JdbcSqlDriver");
            Connection con = DriverManager.getConnection("jdbc:csql", "root", "manager");
            if (con != null) {
                System.out.println("Connection Established");
            } else {
                System.out.println("Connection failed");
            }
            Statement cStmt = con.createStatement();
            cStmt.execute("CREATE TABLE T1 (f1 integer, f2 char (20));");
            System.out.println("Table T1 is created");
            cStmt.close();
            con.close();
        } catch (Exception e) {
            System.out.println("Exception in Test: " + e);
            e.printStackTrace();
        }
    }
}

Go to Projects -> Libraries, right-click, and choose Add JAR/Folder…. Give the CSQL JDBC jar file path and click OK. Now run the application.

Connection Through DataSource

For the DataSource configuration, set up the environment as mentioned above, but with a web application. For example, use the following JSP code:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>JSP Page</title>
</head>
<body>
<%
try {
    // Look up the configured DataSource and create a table through it.
    javax.naming.Context cxc = new javax.naming.InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) cxc.lookup("jdbc/bijaya");
    java.sql.Connection conn = ds.getConnection("root", "manager");
    java.sql.Statement stmt = conn.createStatement();
    stmt.execute("CREATE TABLE papu (f1 int,f2 int);");
    out.println("Table created on csql ");
    conn.close();
} catch (java.sql.SQLException e) {
    out.println("An error occurred.");
}
%>
</body>
</html>

Start the application server and go to the application server admin console for connection pool setup. In the application server, click on JVM Settings; under Path Settings, set the Classpath Prefix and the Native Library Path Suffix.

Now go to Resources -> JDBC -> Connection Pools and create a new connection pool with Datasource Classname csql.jdbc.JdbcSqlDataSource and additional properties for the CSQL JDBC URL, user, and password. Save and ping to verify a successful connection, then create a JDBC resource name for it (jdbc/bijaya in the JSP above).

Now run the web application.


ODBC Architecture

The ODBC driver fits in as a layer of “middleware” in the ODBC architecture, which includes the following components:

Application

An ODBC application is any program that calls ODBC functions and uses them to issue SQL statements. Many vendors have added ODBC support to their existing Windows-based tools so those tools can use ODBC for data access.

ODBC Driver Manager

On Windows platforms, the ODBC driver manager is a Microsoft-supplied dynamic-link library (DLL). On UNIX platforms, the ODBC driver manager is a vendor-supplied shared library. The driver manager routes calls from an application to the ODBC driver. To an application, the ODBC driver manager and the driver are a single entity that processes requests to the database. The driver manager loads the requested driver in response to an application's call to the ODBC SQLConnect or SQLDriverConnect function.

ODBC Driver

An ODBC driver is a dynamic link library (DLL) or a shared library that processes ODBC function calls for a specific data source. The driver connects to the data source, translates the standard SQL statements into syntax the data source can process, and returns data to the application. There are ODBC drivers for every major database system.

Data Source

A data source is a combination of a database system, the operating system it uses, and any network software required to access it. ODBC defines a database system (DBMS) as any vendor’s implementation of a data access system that provides an SQL interface.
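
On UNIX, for example, the unixODBC driver manager reads the DSN definition from odbc.ini, loads the shared-library driver named there, and forwards the application's calls to it. The isql tool that ships with unixODBC exercises exactly this path (the DSN name mydsn and the credentials are illustrative):

$ isql -v mydsn myuser mypasswd
SQL> select 1;

If the connection fails, -v makes isql print the diagnostic records from each layer, which helps identify whether the DSN, the driver, or the data source itself is misconfigured.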


Catalog tools in CSQL

Catalog is a tool in CSQL that provides information about the system and user metadata of tables stored in the CSQL database. It also provides information about tables, indexes, fields, etc.

Usage: catalog [-u username] [-p passwd] [-l] [-i] [-d] [-T table] [-I index] [-D <lock|trans|proc|chunk>]
l -> list all table with field information
i -> reinitialize catalog tables. Drops all tables.
d -> print db usage statistics
T -> list table information
I -> list index information
D -> print debug information for system tables
Note: If multiple options are specified, last one will be considered.

If the user name is not given, it lists all the tables by name only. If multiple options are specified, only the last option is considered for processing.

Let us discuss the options briefly, one by one, with examples.

$ csql
CSQL> create table t1(f1 int, f2 char(20), f3 float);
Statement Executed
CSQL> create table emp(eid int, name char(20), sal float);
Statement Executed
CSQL> quit;
$

We have created two tables, named t1 and emp; now the catalog tool can give their information.

$ catalog

This shows the default behavior mentioned above, since no username is provided:
<TableNames>
<TableName> t1 </TableName>
<TableName> emp </TableName>
</TableNames>

$ catalog -u root -p manager -l

This gives table information for all tables: field name, type, length, constraint info, default value, etc., for every field of every table.

<Table Information of all tables>
<TableInfo>
<TableName> t1 </TableName>
<FieldInfo>
<FieldName> f1 </FieldName>
<Type> 0 </Type>
<Length> 4 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
<FieldInfo>
<FieldName> f2 </FieldName>
<Type> 30 </Type>
<Length> 21 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
<FieldInfo>
<FieldName> f3 </FieldName>
<Type> 11 </Type>
<Length> 4 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
</TableInfo>
<TableInfo>
<TableName> emp </TableName>
<FieldInfo>
<FieldName> eid </FieldName>
<Type> 0 </Type>
<Length> 4 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
<FieldInfo>
<FieldName> name </FieldName>
<Type> 30 </Type>
<Length> 21 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
<FieldInfo>
<FieldName> sal </FieldName>
<Type> 11 </Type>
<Length> 4 </Length>
<Primary> 0 </Primary>
<Null> 0 </Null>
<Default> 0 </Default>
<DefaultValue> </DefaultValue>
</FieldInfo>
</TableInfo>
</Table Information of all tables>

$ catalog -u root -p manager -d

This gives database usage statistics for both the system and the user database. For the system database it displays Max Size, First Page, Total Pages, Used Normal Pages, Used Merged Pages, Chunks Used, process table info, transaction table info, lock table info, etc.; for the user database it displays Max Size, First Page, Total Pages, Used Normal Pages, Used Merged Pages, Chunks Used, etc.

<Database Usage Statistics>
<DatabaseStatistics>
<Database Name> SYSTEMDB </Database Name>
<Max Size> 1048576 </Max Size>
<First Page> 17d84000 </First Page>
<Total Pages> 109 </Total Pages>
<Used Normal Pages> 11 </Used Normal Pages>
<Used Merged Pages> 2 </Used Merged Pages>
<Chunks Used> 13 </Chunks Used>
</DatabaseStatistics>
<ProcTable>
<UsedSlots> 1 </UsedSlots>
<FreeSlots> 99 </FreeSlots>
</ProcTable>
<TransactionTable>
<UsedSlots> 0 </UsedSlots>
<FreeSlots> 100 </FreeSlots>
<UndoLogs>
<TotalNodes> 0 </TotalNodes>
</UndoLogs>
</TransactionTable>
<LockTable>
<TotalBuckets> 2048 </TotalBuckets>
<UsedBuckets> 0 </UsedBuckets>
<TotalLockNodes> 0 </TotalLockNodes>
</LockTable>
<DatabaseStatistics>
<Database Name> userdb </Database Name>
<Max Size> 10485760 </Max Size>
<First Page> 17e7a000 </First Page>
<Total Pages> 1279 </Total Pages>
<Used Normal Pages> 2 </Used Normal Pages>
<Used Merged Pages> 0 </Used Merged Pages>
<Chunks Used> 2 </Chunks Used>
</DatabaseStatistics>
</Database Usage Statistics>

Let us insert some records into the emp table.

$ csql
CSQL> insert into emp values (12, 'jitu', 203.50);
Statement Executed: Rows Affected = 1
CSQL> insert into emp values (13, 'papu', 451.75);
Statement Executed: Rows Affected = 1
CSQL> insert into emp values (14, 'adi', 543.21);
Statement Executed: Rows Affected = 1

CSQL> quit;

$ catalog -u root -p manager -T emp

This lists field and index information for the specified table: table name, tuple count, pages used, space used, number of indexes, tuple length, number of fields, and the names of any indexes.

<Table Info>
<TableName> emp </TableName>
<TupleCount> 3 </TupleCount>
<PagesUsed> 1 </PagesUsed>
<SpaceUsed> 136 </SpaceUsed>
<Indexes> 0 </Indexes>
<TupleLength> 36 </TupleLength>
<Fields> 3 </Fields>
<Indexes>
</Indexes>
</Table Info>

Let us create another table with a primary key.

CSQL> create table stu (roll int, name char(10), primary key(roll));
Statement Executed

$ catalog -u root -p manager -I stu_idx1_Primary

This lists information for the specified index. It also displays hash bucket info, index node info, etc.

<Index Info>
<IndexName> stu_idx1_Primary </IndexName>
<Unique> 1 </Unique>
<HashBucket>
<TotalPages> 1 </TotalPages>
<TotalBuckets> 1009 </TotalBuckets>
</HashBucket>
<IndexNodes>
<TotalPages> 1 </TotalPages>
<TotalNodes> 0 </TotalNodes>
</IndexNodes>
</Index Info>

$ catalog -u root -p manager -D proc

This lists process table information: process ID, thread ID, mutex list, transaction list, used and free slots, etc.

<ProcTable>
<THREADINFO>
<PID> 7298 </PID>
<THRID> 0 </THRID>
<WAIT> 0 </WAIT>
<MUTEXLIST>
</MUTEXLIST>
<TRANSLIST>
</TRANSLIST>
</THREADINFO>
<UsedSlots> 1 </UsedSlots>
<FreeSlots> 99 </FreeSlots>
</ProcTable>

$ catalog -u root -p manager -D lock

This lists lock table information, such as the number of used buckets and the total pages used for locks.

<LockTable>
<TotalUsedBuckets> 0 </TotalUsedBuckets>
<TotalPages> 1 </TotalPages>
</LockTable>

$ catalog -u root -p manager -D trans

This will list transaction table information and undo log information.

<TransactionTable>
<UsedSlots> 0 </UsedSlots>
<FreeSlots> 100 </FreeSlots>
<UndoLogs>
<TotalNodes> 0 </TotalNodes>
<TotalPages> 1 </TotalPages>
</UndoLogs>
</TransactionTable>

$ catalog -u root -p manager -ild

Since only the last option specified is considered, this gives the same result as the -d option.

<TableNames>
<TableName> t1 </TableName>
<TableName> emp </TableName>
<TableName> stu </TableName>
</TableNames>

$ catalog -u root -p manager -i

This will drop all the tables from the database.

<DropTable>
<TableName> t1 </TableName>
<TableName> emp </TableName>
<TableName> stu </TableName>
</DropTable>

$ catalog -u root -p manager -D chunk

This shows information about the chunks of the database: chunk ID, chunk name, total number of data nodes, size of the data nodes, and allocation type. Assuming the database is empty, the following information is displayed:

<Chunk information>
<System Chunk >
<Chunk Id> 0 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > UserChunkTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 112 </SizeOfDataNodes>
<Allocation Type>FixedSizeAllocator</Allocation Type>
<Chunk Id> 1 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > LockTableHashBucketId </ChunkName>
<TotalDataNodes> 1 </TotalDataNodes>
<SizeOfDataNodes> 57348 </SizeOfDataNodes>
<Allocation Type>FixedSizeAllocator</Allocation Type>
<Chunk Id> 2 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > LockTableMutexId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 49156 </SizeOfDataNodes>
<Allocation Type>FixedSizeAllocator</Allocation Type>
<Chunk Id> 3 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > LockTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 20 </SizeOfDataNodes>
<Allocation Type>FixedSizeAllocator</Allocation Type>
<Chunk Id> 4 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > TransHasTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 12 </SizeOfDataNodes>
<Allocation Type>FixedSizeAllocator</Allocation Type>
<Chunk Id> 5 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > UndoLogTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 0 </SizeOfDataNodes>
<Allocation Type>VariableSizeAllocator</Allocation Type>
<Chunk Id> 10 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > DatabaseTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 148 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 11 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > UserTableId </ChunkName>
<TotalDataNodes> 1 </TotalDataNodes>
<SizeOfDataNodes> 260 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 12 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > TableTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 152 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 13 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > FieldTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 196 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 14 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > AccessTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 260 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 15 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > IndexTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 164 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 16 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > IndexFieldTableId </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 16 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
</System Chunk >
<User Chunk >
</User Chunk >
</Chunk information>

Let us create a table with a primary key (which also creates a unique index) and see what happens in the chunks.

$ csql
CSQL> create table t2(f1 int, f2 char(20),primary key(f1));
Statement Executed
CSQL> quit;

You will see some changes in the system chunks, and new chunks are created in the user chunk section, as follows:

$ catalog -u root -p manager -D chunk
<Chunk information>
<System Chunk >
<Chunk Id> 0 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > UserChunkTableId </ChunkName>
<TotalDataNodes> 3 </TotalDataNodes>
<SizeOfDataNodes> 112 </SizeOfDataNodes>

……………………
……………………
……………………

</System Chunk >
<User Chunk >
<Chunk Id> 101 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > t2 </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 36 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 102 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > t2_idx1_Primary </ChunkName>
<TotalDataNodes> 1 </TotalDataNodes>
<SizeOfDataNodes> 28256 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
<Chunk Id> 103 </Chunk Id>
<TotalPages> 1 </TotalPages>
<ChunkName > t2_idx1_Primary </ChunkName>
<TotalDataNodes> 0 </TotalDataNodes>
<SizeOfDataNodes> 16 </SizeOfDataNodes>
<Allocation Type> FixedSizeAllocator </Allocation Type>
</User Chunk >
</Chunk information>

CSQL: Postgres Configuration for Multiple Bi-directional Cache

For multiple bi-directional CSQL cache nodes with Postgres as the single data source, be sure that the Postgres database is installed and currently running. First create the log table in Postgres to hold the log records, using the SQL statements given below with the psql or isql tool.

CREATE TABLE csql_log_int(tablename varchar(64), pkid int, operation int, cacheid int);
ALTER TABLE csql_log_int add id serial;

Let us consider two CSQL cache nodes. In csql.conf, set CACHE_ID = 1 on one cache node and CACHE_ID = 2 on the other, and make sure that CACHE_TABLE and ENABLE_BIDIRECTIONAL_CACHE are set to true. Again, DSN should be set to psql for Postgres. Set up the /etc/odbcinst.ini and /etc/odbc.ini files properly; for help, refer to the uni-directional cache configuration.

For a cached table t with primary key f1, create the triggers (say, trigger.psql) in the following format in the Postgres database:

CREATE LANGUAGE plpgsql;
CREATE FUNCTION log_insert_t() RETURNS trigger AS $triggerinsertt$
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', NEW.f1, 1, 2);
RETURN NEW;
END;
$triggerinsertt$ LANGUAGE plpgsql;
create trigger triggerinsertt
AFTER INSERT on t
FOR EACH ROW
EXECUTE PROCEDURE log_insert_t();

CREATE FUNCTION log_update_t() RETURNS trigger AS $triggerupdatet$
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', NEW.f1, 1, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', OLD.f1, 2, 2);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', NEW.f1, 1, 2);
RETURN NEW;
END;
$triggerupdatet$ LANGUAGE plpgsql;

create trigger triggerupdatet
AFTER UPDATE on t
FOR EACH ROW
EXECUTE PROCEDURE log_update_t();

CREATE FUNCTION log_delete_t() RETURNS trigger AS $triggerdeletet$
BEGIN
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', OLD.f1, 2, 1);
insert into csql_log_int (tablename, pkid, operation, cacheid) values ('t', OLD.f1, 2, 2);
RETURN OLD;
END;
$triggerdeletet$ LANGUAGE plpgsql;

create trigger triggerdeletet
AFTER DELETE on t
FOR EACH ROW
EXECUTE PROCEDURE log_delete_t();

The trigger names end with the table name. Replace 't' in the above script with the cached table name and 'f1' with the primary-key field name of the cached table.

After writing the trigger file, run it against the Postgres database:

$ psql test -f trigger.psql

In the above command, it is assumed that the trigger file is in the current directory.
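
To confirm the triggers fire (a sketch; assumes t has the integer primary key f1 and, for simplicity, only that column), insert a row on the Postgres side and inspect the log table:

$ psql test -c "INSERT INTO t VALUES (1);"
$ psql test -c "SELECT tablename, pkid, operation, cacheid FROM csql_log_int;"

Two rows with operation 1 should appear, one for each cache node ID.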