Saturday, May 16, 2015

A multi-tenant application architecture using Vaadin, Spring, jOOQ and PostgreSQL

In this tutorial we will put together a full application stack for a multi-tenant architecture.
We will be using

  • Vaadin and Vaadin-Spring for the UI
  • PostgreSQL for storage
  • jOOQ for the data layer
  • Spring Security for authentication
  • Some glue magic to handle multi-tenancy

You can find the complete source code for this example on GitHub. I will not go into the details of the different technologies used; instead the focus lies on the schema setup and the implementation of the tenancy logic.

We basically start off with the vaadin-spring-security example, the one you can find in the vaadin4spring repository.
This example gives us a basic setup for a Spring based Vaadin application that is able to authenticate a user using Spring Security. For multi-tenancy I decided to opt for a schema-per-tenant solution like this:

[Schema diagram: a master schema holding the table definitions, with one schema per tenant inheriting from it]

The master schema contains the table definitions for all tables; tenant schemas inherit these definitions to ensure we have a consistent data structure across all tenants. All database objects are managed using Liquibase scripts. Liquibase is a database refactoring tool; if you have never heard of it, find out for yourself at http://www.liquibase.org/.

The master.tenant table holds information about the existing tenants and their database connection properties. For each tenant schema a corresponding database user is created by Liquibase; that user has restricted privileges and can only access its own tenant schema.
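
For illustration, the changeset for the master.tenant table could look roughly like this (the column names here are assumptions; the actual definition is in the GitHub example):

<changeSet author="thomas" id="master-create-table-tenant">
    <createTable tableName="tenant" schemaName="master">
        <column name="id" type="BIGINT" autoIncrement="true"/>
        <column name="name" type="varchar(255)"/>
        <column name="schema_name" type="varchar(255)"/>
        <column name="db_user" type="varchar(255)"/>
        <column name="db_password" type="varchar(255)"/>
    </createTable>
</changeSet>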

The master.user table, for example, is created like this:

<changeSet author="thomas" id="master-create-table-user">
    <createTable tableName="user" schemaName="master" >
        <column name="id" type="BIGINT" autoIncrement="true"/>
        <column name="user_name" type="varchar(255)"/>
        <column name="password_hash" type="varchar(2048)"/>
        <column name="active" type="boolean" defaultValue="false"/>
    </createTable>
</changeSet>

The tenant_xx.user tables are defined as:

<changeSet author="thomas" id="${db.schema}-create-child-table-user" >
    <sql>
        create table ${db.schema}.user () INHERITS (master.user);
    </sql>
</changeSet>

Through this inheritance trick we ensure that the actual structure of the tables stays in sync across all tenants; plus, as we'll see later, we are able to query across all tenants from the master schema.
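
This is plain PostgreSQL table inheritance: a query against the parent table includes the rows of all child tables by default, and the ONLY keyword restricts it to the parent. For example (hypothetical data):

-- all users from every tenant schema; tableoid reveals the physical table
select tableoid::regclass as source_table, user_name from master.user;

-- only rows physically stored in master.user itself
select user_name from only master.user;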

The Liquibase scripts in the GitHub example create two tenants (tenant_1 and tenant_2) and one admin user each. For the sake of simplicity the admin user name is the same as the tenant name, and the password is simply "admin". So if you try to run the example you can log in as tenant_1/tenant_1/admin.

Now we have a basic database schema. The Liquibase scripts needed some tuning to run not only on Postgres but also on HSQLDB. We need this at least for unit testing, but we also want to be able to generate our jOOQ mapping and DSL without having to run a PostgreSQL server on our build server.

Running the Maven build creates the master schema on a temporary HSQLDB instance and runs the jOOQ code generator afterwards. The idea for this came from http://stanislas.github.io/2014/08/23/liquibase-and-jooq.html.
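
The wiring in the pom could look roughly like this (a sketch, assuming jOOQ 3.6 package names, a file-based HSQLDB under target/ and an example changelog path; the exact configuration is in the example's pom.xml):

<!-- 1. run the liquibase changelog against a throwaway hsqldb -->
<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals><goal>update</goal></goals>
            <configuration>
                <changeLogFile>src/main/liquibase/master.xml</changeLogFile>
                <driver>org.hsqldb.jdbc.JDBCDriver</driver>
                <url>jdbc:hsqldb:file:${project.build.directory}/hsqldb/codegen</url>
                <username>sa</username>
                <password></password>
            </configuration>
        </execution>
    </executions>
</plugin>

<!-- 2. generate the jooq mapping and dsl from that schema -->
<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals><goal>generate</goal></goals>
        </execution>
    </executions>
    <configuration>
        <jdbc>
            <driver>org.hsqldb.jdbc.JDBCDriver</driver>
            <url>jdbc:hsqldb:file:${project.build.directory}/hsqldb/codegen</url>
            <user>sa</user>
        </jdbc>
        <generator>
            <database>
                <name>org.jooq.util.hsqldb.HSQLDBDatabase</name>
                <inputSchema>MASTER</inputSchema>
            </database>
            <target>
                <packageName>com.example.model.jooq</packageName>
                <directory>target/generated-sources/jooq</directory>
            </target>
        </generator>
    </configuration>
</plugin>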

This is all the basic stuff we need before we can get into the multi-tenant part. I wanted to have a clean separation of the tenants from a connection perspective, so each tenant has its own restricted database user and its own connection pool. This has some drawbacks, as we may end up with quite a number of connection pools, but the advantage is that whatever you try to do with the tenant connection you get from Spring, you will never be able to touch any tenant other than the one you are authenticated for. The MultiTenantDataSource is a simple proxy wrapper that routes you to the proper connection pool based on the current Spring Authentication object. Here comes a snippet:

@Component
@Qualifier("no-tx")
public class MultiTenantDataSource implements DataSource {

    @Autowired
    private TenantAuthentication authentication;

    @Autowired
    private TenantDao tenantDao;

    @Autowired
    private TenantHelper tenantHelper;

    // one connection pool per tenant, created lazily on first access
    private final Map<String, DataSource> dataSourceMap = new ConcurrentHashMap<>();

    @Override
    public Connection getConnection() throws SQLException {
        return getDataSource().getConnection();
    }

    private DataSource getDataSource() {
        String tenantName = authentication.getTenant(); // will throw if not authenticated
        return dataSourceMap.computeIfAbsent(tenantName, key -> {
            Optional<ITenant> tenantOptional = tenantDao.findbyName(key);
            return tenantOptional.map(tenant -> {
                HikariConfig config = tenantHelper.toHikariConfig(tenant);
                return new HikariDataSource(config);
            }).orElseThrow(() -> new IllegalStateException("This should never happen"));
        });
    }

    // ... the remaining DataSource methods are omitted for brevity ...
}

The injected TenantAuthentication is a proxy bean around the Authentication object from the Spring SecurityContextHolder:

@Bean(name = "currentToken")
TenantAuthentication currentToken() {
    return ProxyFactory.getProxy(TenantAuthentication.class, new MethodInterceptor() {
        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            SecurityContext securityContext = SecurityContextHolder.getContext();
            TenantAuthenticationToken authentication = (TenantAuthenticationToken)securityContext.getAuthentication();
            if (authentication == null) {
                 throw new AuthenticationCredentialsNotFoundException("No auth..");
            }
            return invocation.getMethod().invoke(authentication, invocation.getArguments());
        }
    });
}
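
The TenantAuthentication interface itself is small; here is a minimal sketch of what the proxy is assumed to implement (the real definition is in the GitHub example):

public interface TenantAuthentication extends Authentication {

    /** the name of the tenant the current user is authenticated for */
    String getTenant();
}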


Wiring jOOQ and Spring with transaction management takes a little effort; I solved it as explained in http://www.jooq.org/doc/3.6/manual/getting-started/tutorials/jooq-with-spring/
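
Following that tutorial, the connectionProvider() used in the DSLContext below essentially wraps our multi-tenant DataSource in Spring's TransactionAwareDataSourceProxy, so jOOQ takes part in Spring-managed transactions. A minimal sketch, assuming the injected dataSource is the MultiTenantDataSource from above:

@Autowired
@Qualifier("no-tx")
private DataSource dataSource; // the MultiTenantDataSource from above

@Bean
public DataSourceConnectionProvider connectionProvider() {
    // lets jOOQ obtain its connections through Spring's transaction management
    return new DataSourceConnectionProvider(
            new TransactionAwareDataSourceProxy(dataSource));
}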

The last step we have to solve is mapping the jOOQ master schema to the currently authenticated user's schema. I solved this again with a proxy bean for the jOOQ DSLContext:

@Bean
public DSLContext dsl() {
    return ProxyFactory.getProxy(DSLContext.class, new MethodInterceptor() {

        // one DSLContext per tenant, created lazily
        private final Map<String, DSLContext> contextMap = new ConcurrentHashMap<>();

        @Override
        public Object invoke(MethodInvocation invocation) throws Throwable {
            String tenant = authentication.getTenant(); // will throw if not authenticated
            DSLContext ctx = contextMap.computeIfAbsent(tenant, key -> {
                // rewrite every reference to the master schema to the tenant's schema
                Settings settings = new Settings().withRenderMapping(new RenderMapping()
                        .withSchemata(new MappedSchema().withInput("master").withOutput(key)));
                DefaultConfiguration configuration = new DefaultConfiguration();
                configuration.setSQLDialect(SQLDialect.POSTGRES_9_4);
                configuration.setSettings(settings);
                configuration.setConnectionProvider(connectionProvider());
                configuration.setExecuteListenerProvider(
                        new DefaultExecuteListenerProvider(new ExceptionTranslator()));
                return new DefaultDSLContext(configuration);
            });
            return invocation.getMethod().invoke(ctx, invocation.getArguments());
        }
    });
}

Now we can inject this DSLContext into any data access object, and we can be sure that we will always access the schema of the current user, using a connection that is authorized to access only this schema:

@Component
public class UserDao {
    /**
     * the dsl context.
     *
     * the context we inject here is actually a proxy bound to the current tenant schema
     * backed by a datasource that has only access to this tenant schema
     */
    @Autowired
    DSLContext dsl;

    public List<IUser> findAll() {
        return dsl.selectFrom(USER).fetchInto(User.class);
    }
}

Finally we need a way of authenticating our users. This is done in a straightforward way using a Spring AuthenticationProvider and a view that lets us read all users across the tenant schemas:

<sql>
    create view "master"."v_user" as
    select n.nspname as tenant, u.user_name, u.password_hash
    from master.user u
    left join pg_catalog.pg_class p on u.tableoid = p.oid
    left join pg_catalog.pg_namespace n on n.oid = p.relnamespace
    where u.active = true;
</sql>
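
With the view in place, the AuthenticationProvider only has to look up the user for the tenant given at login and verify the password hash. A minimal sketch (the DAO lookup method, the token constructor and the granted authority are assumptions; the real implementation is in the GitHub example):

@Component
public class TenantAuthenticationProvider implements AuthenticationProvider {

    @Autowired
    private UserDao userDao; // assumed to expose a lookup on master.v_user

    @Autowired
    private PasswordEncoder passwordEncoder;

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        TenantAuthenticationToken token = (TenantAuthenticationToken) authentication;
        // hypothetical DAO method querying master.v_user by tenant and user name
        IUser user = userDao.findActiveUser(token.getTenant(), token.getName())
                .orElseThrow(() -> new BadCredentialsException("Unknown user"));
        if (!passwordEncoder.matches((String) token.getCredentials(), user.getPasswordHash())) {
            throw new BadCredentialsException("Bad credentials");
        }
        // constructor arguments are an assumption about TenantAuthenticationToken
        return new TenantAuthenticationToken(token.getTenant(), token.getName(),
                AuthorityUtils.createAuthorityList("ROLE_USER"));
    }

    @Override
    public boolean supports(Class<?> clazz) {
        return TenantAuthenticationToken.class.isAssignableFrom(clazz);
    }
}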

Check out the example on GitHub if you'd like to dive into the nifty details!

Saturday, April 18, 2015

Magnolia with Blossom and Spring-Loaded

It is quite easy to use Spring Loaded with Magnolia Blossom.
We just refresh the application context when reloading a template or area, and fire a BlossomDispatcherInitializedEvent to re-export the template definitions. Happy reloading.
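
A rough sketch of the idea, using Spring Loaded's ReloadEventProcessorPlugin callback (the wiring and the event constructor are assumptions; adapt to your setup):

public class BlossomReloadPlugin implements ReloadEventProcessorPlugin {

    private final ConfigurableApplicationContext appCtx;
    private final BlossomDispatcherServlet dispatcher;

    public BlossomReloadPlugin(ConfigurableApplicationContext appCtx,
                               BlossomDispatcherServlet dispatcher) {
        this.appCtx = appCtx;
        this.dispatcher = dispatcher;
    }

    @Override
    public void reloadEvent(String typename, Class<?> clazz, String encodedTimestamp) {
        appCtx.refresh(); // re-create the reloaded template/area beans
        // re-export the template definitions (the event constructor is an assumption)
        appCtx.publishEvent(new BlossomDispatcherInitializedEvent(dispatcher));
    }

    @Override
    public boolean shouldRerunStaticInitializer(String typename, Class<?> clazz, String encodedTimestamp) {
        return false;
    }
}

Register it once with Plugins.registerGlobalPlugin(...).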


Sunday, April 12, 2015

ONIX for Books Product Information Message Editor

Over the last couple of weekends I tried to create a simple XML message editor for ONIX for Books Product Information Messages.
It is almost complete and contains links on every element to the original ONIX standard documentation, so one can check exactly what the ONIX message means with a specific entry.

It's published under the terms of the EUPL 1.1; you can go and check it out at http://eiswind.github.io/onix-editor/


Sunday, August 3, 2014

Consuming/Resolving p2 artifacts without p2

Nearly every build tool these days has some way to create p2 repositories that can be consumed by Eclipse. But I never found a way to consume p2 artifacts in non-Eclipse projects, until yesterday, when I stumbled across the Ivy updatesite resolver.

It is implemented in a simple manner, without the complex p2 machinery involved: it parses the repository descriptor files and then resolves the artifacts. It is still not perfect; for example, I could not read the RAP repository (IVY-1481). But it looks very promising, and I guess it would not be too hard to integrate this into Gradle to materialize p2 artifacts in a build.
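
Using it boils down to declaring the resolver in ivysettings.xml, roughly like this (a sketch; the update-site URL is only an example):

<ivysettings>
    <settings defaultResolver="eclipse"/>
    <resolvers>
        <!-- resolves bundles directly from a p2 update site -->
        <updatesite name="eclipse" url="http://download.eclipse.org/releases/luna"/>
    </resolvers>
</ivysettings>

Bundles can then be pulled in by their symbolic name, e.g. with org="bundle" in the dependency declaration.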

Sunday, December 29, 2013

Running Vaadin on Virgo the Maven way

Today I tried to run Vaadin on Virgo:

I created a web bundle using the Felix Maven bundle plugin, added the Virgo tooling bundle nature, and was able to deploy it using the Virgo tooling.
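
The manifest generation might look roughly like this (a sketch; the context path and the instructions are assumptions):

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <supportedProjectTypes>
            <supportedProjectType>war</supportedProjectType>
        </supportedProjectTypes>
        <instructions>
            <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
            <!-- marks the war as a web application bundle -->
            <Web-ContextPath>/vaadin-app</Web-ContextPath>
            <Import-Package>*</Import-Package>
        </instructions>
    </configuration>
</plugin>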


I still wonder what the best practices would be to collect all the dependencies of a larger project and get them deployed. How do you solve this for your production machines?