
Read Only Views (ROVs) were introduced in Pivotal 6. A ROV is essentially a database view that is treated as a Pivotal table, so you can create forms, searches and queries associated with it. ROVs are very powerful and enable diverse scenarios, including integration with third-party systems, reporting, and even BI. I will assume that you know how ROVs work in Pivotal 6; in this article I will explain how you can use them in Pivotal 5.9.

There are a lot of clients still using Pivotal 5.9 (either Rich Client or Windows Client) who have asked me how they can integrate external data into their CRM in an easy way. The answer so far has been to put in place a complex data synchronization process between the external data source and a Pivotal table. ROVs are the perfect solution for this kind of problem: you simply define a view over the external data (using linked servers on MS SQL Server, or database links on Oracle) and have it treated as a normal table in Pivotal. The problem is that ROVs do not work in Pivotal 5.9. Or do they?

The good news is that it is possible to get ROVs working in Pivotal 5.9, with some tricks of course. The bad news is that this is probably not supported by Aptean. Below, I will explain how to configure ROVs in your Pivotal 5.9 system.

You can trick Pivotal into using a view while thinking it is using a normal table: create any table using the Toolkit, drop that table in the database, and then create a view with the same name and schema as the Pivotal table (Rn_ fields and all). Voila! You are using a view as a Pivotal table. This works, but the problem appears when you do an Apply Customization Changes (ACC) from the Pivotal Administration Console (PAC): Pivotal looks for the table, does not find one (there is a view instead), and tries to create it, and now you are in trouble because you cannot have a table and a view with the same name. Result: after your first ACC you have an inconsistent system. So this alone does not work; we need something else.

It turns out that the software that handles the creation and maintenance of ROVs is the PAC. If you have an up-to-date 5.9 system, chances are you are running PAC version 6.0.8 or above (the one introduced with the change in the licensing scheme that got rid of the old and troublesome License Server). If your PAC is older than 6.0.8, the steps below will not apply to you and you will need to deal with ROVs some other way (it is still possible, though).


The Table

Let’s do a simple example: suppose that we need to display a calendar of all the internal and external (ePartner) activities. We can create a table called Activity that is a union of all the contact activities (those entered by ePartner users and saved in the Contact_Activities table) and the internal activities (those entered by CRM users and saved in the Rn_Appointments table) defined in our system (a more useful example would use external data, but for the sake of simplicity this one will do). Let’s create the table in Pivotal first:

Activity table

We’ve created a table with some of the fields that the Rn_Appointments and Contact_Activities tables have in common. This is the list of those fields (a rough sketch of the resulting physical table follows the list):

Access_Type
Activity_Complete
Activity_Completed_Date
Activity_Type
Appt_Date
Appt_Description
Due_Date
End_Time
Location
Notes
Start_Time
Time_Zone_Id
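
For reference, the physical table generated by the PAC would look roughly like the sketch below (SQL Server flavor shown). The Id and the Rn_ system columns are standard Pivotal; the business-field types are assumptions and depend on the data dictionary types you picked in the Toolkit.

-- Rough sketch of the generated Activity table; column types marked "assumed"
-- are guesses, check your own Toolkit definition
create table Activity (
    Activity_Id             binary(8)    not null,  -- Pivotal record Id
    Rn_Descriptor           varchar(80)  null,      -- standard descriptor, length 80
    Rn_Create_Date          datetime     null,
    Rn_Edit_Date            datetime     null,
    Rn_Create_User          binary(8)    null,
    Rn_Edit_User            binary(8)    null,
    Access_Type             tinyint      null,      -- assumed choice type
    Activity_Complete       tinyint      null,      -- assumed yes/no type
    Activity_Completed_Date datetime     null,
    Activity_Type           tinyint      null,      -- assumed choice type
    Appt_Date               datetime     null,
    Appt_Description        varchar(255) null,      -- assumed length
    Due_Date                datetime     null,
    End_Time                datetime     null,
    Location                varchar(80)  null,      -- assumed length
    Notes                   ntext        null,      -- assumed memo type
    Start_Time              datetime     null,
    Time_Zone_Id            binary(8)    null       -- assumed link to Time_Zone
)
go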

Now, do an ACC and have the PAC create the table in the database. So far, nothing new: we have a new table. Now comes the fun part. The steps that follow need to be done in the database. There are some minor differences between Oracle and SQL Server, so I will provide notes when something differs between them. When I say that something needs to be done in the ED, on SQL Server this means performing the steps in the ED database, while on Oracle it means logging in as the ED schema user. The same applies to the BM: on SQL Server, perform the steps in the BM database; on Oracle, log in as the BM schema user.
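
For example, on SQL Server you switch databases before each block, while on Oracle you connect as the corresponding schema user. The names below are placeholders, not your actual ED/BM names:

-- SQLServer: run the ED steps in the ED database and the BM steps in the BM
-- database (CRM_ED / CRM_BM are placeholder names)
use CRM_ED
go
-- Oracle: connect as the ED (or BM) schema user instead, e.g. in SQL*Plus:
--   connect crm_ed/your_password@your_db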

In the ED, go ahead and delete the table from the database (yes, delete the table!):

drop table activity;


Configuration of the BM

We need to modify the schema of the Tables table in the BM. The BM contains the metadata for the entire Pivotal system, and the tables are defined in the Tables table. We need to add two fields to this table. In the BM, execute the following:

-- Oracle
alter table Tables add Table_Type number(3,0);
alter table Tables add Read_Only_View clob;

-- SQLServer
alter table Tables add Table_Type tinyint
alter table Tables add Read_Only_View ntext
go

We added two new fields to the table: Table_Type, as its name suggests, identifies the type of the table (ROVs have a table type of 2), and Read_Only_View will contain the view definition. Let’s set Table_Type now; we will fill in Read_Only_View once we have the view definition.

-- Oracle
update Tables set Table_Type = 2 where Table_Name = 'Activity';
commit;
-- SQLServer
update Tables set Table_Type = 2 where Table_Name = 'Activity'
go


The View Definition

Now, let’s define the view that will replace the table. The view schema has to match exactly the schema of the table, including the Rn_ fields and lengths and types. If the schemas do not match you will get a complaint from PAC when doing the ACC.

-- Oracle
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast('000A' || substr(cast(a.Contact_Activities_Id as varchar2(16)),5,12) as raw(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a
    
    

-- SQLServer
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast(0x000A + substring(a.Contact_Activities_Id,3,6) as binary(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a

You can see both versions of the view, for Oracle and for SQL Server. There are a few things to keep in mind when creating the ROV. In our example, the Rn_Descriptor column is cast to varchar(80), since the Table_Descriptor type has length 80 (and the view schema and the table schema need to match). The other important thing is the formula used to calculate the Id of the new table: we want to reuse the Ids of the underlying tables, but one of them (Contact_Activities) gets an offset in order to avoid duplicate Ids. Notice how the offset is calculated in the Contact_Activities branch: string concatenation with substr on Oracle, binary concatenation with substring on SQL Server. In the end, this is just a manipulation of Pivotal Ids.
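
To make the offset concrete, here is a worked example with a made-up Contact_Activities_Id (SQL Server syntax; the value 0x0042000000000017 is hypothetical):

-- SQLServer sketch: the leading two bytes of the original Id are replaced with
-- the arbitrary offset 0x000A so the two branches of the union cannot collide
declare @id binary(8)
set @id = 0x0042000000000017
select cast(0x000A + substring(@id, 3, 6) as binary(8)) as Activity_Id
-- returns 0x000A000000000017
go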

Make sure you update the Read_Only_View field of the Activity row in the Tables table with the view query defined above.
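
For example, something along these lines (SQL Server flavor; this is only a sketch, you need to paste the complete SELECT statement and double any single quotes it contains):

-- SQLServer sketch: store the view definition for the Activity ROV
update Tables
set Read_Only_View = N'select a.Rn_Appointments_Id Activity_Id, /* ...rest of the SELECT above... */ from Contact_Activities a'
where Table_Name = 'Activity'
go
-- Oracle: same UPDATE (Read_Only_View is a clob), followed by a commit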

Now, do an ACC. The first time you do an ACC after configuring the ROV, the PAC may propose to add the missing table and also to create the view. Don’t accept the changes yet: the PAC proposes to create the table based on the information contained in the shadow tables in the ED. Do an ACC one more time and you should see that only the creation of the view is proposed. Accept the changes. You now have your ROV working and integrated into your Pivotal 5.9 system, and you can start creating forms and queries, defining security (it must be read-only), etc.
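
If you want to double-check the result, a quick (optional) query in the ED should now report Activity as a view rather than a base table (SQL Server flavor shown):

-- Optional check in the ED after the second ACC
select table_name, table_type
from information_schema.tables
where table_name = 'Activity'
-- expected table_type: VIEW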

There are some things that need to be taken care of in order to have a complete solution.


Deployments

Of course, not everything is rosy. When I did my first deployment, a replica of the BM from the development environment to the integration environment, I was surprised to see that the changes made to the Tables table didn’t travel with the replica. If this didn’t work I had a big problem, since I didn’t want to repeat the whole procedure on every deployment. After some digging I found the solution: the PAC creates the structure of the BM based on the information contained in the RSys_Tables and RSys_Table_Fields tables in the BM. These are what is known in Pivotal jargon as “shadow tables”. We need to update the shadow tables to reflect our newly created fields.

In the BM run the following:

-- Oracle
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Table_Type',
  10,
  null
);
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 2) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  1,
  null
);
commit;

-- SQLServer
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 1 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Table_Type',
    10,
    null
)
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 2 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Read_Only_View',
    1,
    null
)
go

The above script just inserts the new field definitions into RSys_Table_Fields. We take the maximum Table_Fields_Id and add 1 and 2 respectively to create the Id of each new field.
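
An optional sanity check is to confirm that the two rows are now registered against the Tables table:

-- Optional check in the BM: both new fields should show up for the Tables table
select Field_Name, Field_Type, Field_Length
from RSys_Table_Fields
where Tables_Id = (select Tables_Id from RSys_Tables where Table_Name = 'Tables')
  and Field_Name in ('Table_Type', 'Read_Only_View')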

After this, when you do a backup of the BM, replicate the BM from one system to another, or do a UFO, the view definitions will be maintained.


Metadata in the Toolkit

You should also update the Toolkit metadata to keep everything consistent. You need to add the two new BM fields, Table_Type and Read_Only_View, to the CM database. In the CM, run the following:

-- Oracle
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
);

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
);

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

commit;


-- SQLServer
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
)

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
)

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

go



Performance

Unfortunately, we cannot forget about performance. This part of the article applies to ROVs in general, including ROVs in Pivotal 6.x.

Both SQL Server and Oracle are smart enough to use the indexes of the underlying tables when querying the view, so every query you run against the ROV should be backed by an index on those underlying tables. In our example this is easy, since we only have two underlying tables and we can define the indexes using the Toolkit. It is not that easy when you work with tables that are not in the same database server, accessed through linked servers in SQL Server or database links in Oracle. You need to be careful and verify that the queries are executed at the source; otherwise you will have a performance problem.
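
As an illustration, assuming users will mostly filter the activity calendar by date, the Toolkit-defined indexes in our example would amount to something like this at the database level (index names are made up):

-- SQLServer sketch: back the most common ROV filter (Appt_Date) with an index
-- on each underlying table
create index Rn_Appointments_Appt_Date_ROV on Rn_Appointments (Appt_Date)
create index Contact_Activities_Appt_Date_ROV on Contact_Activities (Appt_Date)
go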

There is one thing in particular that is not obvious: the primary key (or what would be the primary key if this were a normal table) of the ROV. In our example, the primary key is the Activity_Id field. This is a calculated field, and it is not indexed. This means that when Pivotal opens a form for a particular record of the ROV, the platform queries the view by primary key, and since that key is not indexed, you get a performance problem.
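
For instance, opening a form produces a lookup by Id roughly like the following (a sketch; the real generated SQL differs, and the Id shown is hypothetical):

-- Without an index on the calculated Id, this forces a scan of Contact_Activities
select * from Activity where Activity_Id = 0x000A000000000017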

To avoid this we need to define an index on the calculated field. In our example, the Id coming from Rn_Appointments is just that table’s primary key, with no formula, so there is nothing to worry about there. That is not the case for Contact_Activities, where the Id has a formula.

Oracle supports function-based indexes, so the new index is as follows:

-- Oracle
create unique index Contact_Activities_ROV1 on Contact_Activities (cast('000A'||substr(cast(Contact_Activities_Id as varchar2(16)),5,12) AS raw(8)));

SQL Server doesn’t support function-based indexes, but you can create an index on the view. The problem is that you can’t index a view that has a text or ntext column, and in our case we have one of those. So we need to resort to another technique: create a calculated (computed) column in the Contact_Activities table and then define an index on it:

-- SQLServer
alter table Contact_Activities add Contact_Activities_Id_ROV as cast(0x000A + substring(Contact_Activities_id,3,6) as binary(8))

create unique index Contact_Activities_ROV1 on Contact_Activities(Contact_Activities_Id_ROV)

go


Formula field

You need to create this field in the Toolkit as well, and then do an ACC.

The problem with this approach is that you will need to do this manually in all the environments, since Pivotal doesn’t allow you to enter the above formula as part of the field definition. You only need to do it manually once per environment; the formula should be respected by the PAC during successive deployments.

However, this is something I have not tested yet, so please be careful with this solution.





Considerations

The above was just one example of the many uses of ROVs. The objective of this article was to explain how to use ROVs in Pivotal 5.9 and the steps required to make them part of the platform.

Although only briefly touched on here, the use of ROVs doesn’t come for free; there are many aspects you should consider before implementing ROVs in any version of Pivotal. Performance is usually the part that causes the most trouble, and it should be taken seriously when the underlying data comes from big tables. Ignoring it can affect your whole system. There are a lot of good practices around the use of ROVs and I will try to describe some of them in future articles.