
Read Only Views (ROVs) were introduced in Pivotal 6. An ROV is basically a view in the database that is treated as a Pivotal table, so you can create forms, searches and queries associated with it. ROVs are very powerful and allow for diverse scenarios, including integration with third-party systems, reporting, and even BI. I will assume that you know how ROVs work in Pivotal 6; in this article I will explain how you can use ROVs in Pivotal 5.9.

There are a lot of clients who are still using Pivotal 5.9 (either Rich Client or Windows Client) who have asked me how they can easily integrate external data into their CRM. The answer so far has been to put in place a complex data synchronization scenario between the external data source and a Pivotal table. ROVs are the perfect solution for this kind of problem: you can define a view over the external data (using linked servers if on MS SQLServer, or database links if on Oracle) and have it treated as a normal table in Pivotal. The problem is that ROVs do not work in Pivotal 5.9. Or do they?

The good news is that it is possible to have ROVs work in Pivotal 5.9, with some tricks of course. The bad news is that this is probably not supported by Aptean. Next, I will explain how to configure ROVs in your Pivotal 5.9 system.

You can trick Pivotal into using a view while thinking it is using a normal table: create any table using the Toolkit, delete that table in the database, then create a view with the same name and schema as the Pivotal table (with the Rn_ fields and all) and voila! You are using a view as a Pivotal table. This works, but the problem comes when you do an Apply Customization Changes (ACC) using the Pivotal Administration Console (PAC): Pivotal will look for a table, will not find one (there is a view instead), and will try to create it, and here you get into trouble because you cannot have a table and a view with the same name. Result: after your first ACC you will have an inconsistent system. This does not work; we need something else.
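
To make the trick concrete, here is a minimal sketch of this naive approach in SQLServer syntax (My_Table is a made-up example; the view must expose exactly the columns the Toolkit created, Rn_ fields included):

-- Naive approach: replace the Toolkit-created table with a view (breaks on the next ACC)
drop table My_Table
go
create view My_Table as
select
    cast(0x8000000000000001 as binary(8)) My_Table_Id,  -- same Id column name and type
    cast('Example' as varchar(80)) Rn_Descriptor        -- ...plus the rest of the columns,
                                                        -- with identical names, types and lengths
go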

It turns out that the software that handles the creation and maintenance of ROVs is the PAC. If you have an up-to-date 5.9 system, chances are that you are currently using a PAC version 6.0.8 or above (this is the one introduced with the change in the licensing schema, getting rid of the old and troublesome License Server). If you don’t have a PAC version 6.0.8 or above, then the steps below will not apply to you and you will need to deal with ROVs some other way (it is still possible though).


The Table

Let’s do a simple example: suppose that we need to display a calendar with all the internal and external (ePartner) activities. We can have a table called Activity that is a union of all the contact activities (those entered by ePartner users and saved in the Contact_Activities table) and internal activities (those entered by the CRM users and saved in the Rn_Appointments table) defined in our system (a more useful example would use external data, but for the sake of simplicity this one will do). Let’s create the table in Pivotal first:

Activity table

We’ve created a table with some of the fields that are common to the Rn_Appointments and Contact_Activities tables. This is the list of those fields:

Access_Type
Activity_Complete
Activity_Completed_Date
Activity_Type
Appt_Date
Appt_Description
Due_Date
End_Time
Location
Notes
Start_Time
Time_Zone_Id

Now, do an ACC and have the PAC create the table in the database. So far, nothing new: we have a new table. Now comes the fun part. The steps that follow need to be done in the database. There are some minor differences between Oracle and SQLServer, so I will provide notes when something differs between them. When I mention that something needs to be done in the ED, in SQLServer this means that you will perform the steps in the ED database, while in Oracle it means that you will log in as the ED schema user. The same applies to the BM: when I say that something needs to be done in the BM, in SQLServer it means the BM database, and in Oracle it means logging in as the BM schema user.

In the ED, go ahead and delete the table from the database (yes, delete the table!):

drop table activity;


Configuration of the BM

We need to modify the schema of the Tables table in the BM. The BM contains the metadata for the entire Pivotal system, and the tables are defined in the Tables table. We need to add two fields to this table. In the BM, execute the following:

-- Oracle
alter table Tables add Table_Type number(3,0);
alter table Tables add Read_Only_View clob;

-- SQLServer
alter table Tables add Table_Type tinyint
alter table Tables add Read_Only_View ntext
go

We added two new fields to the table: Table_Type, as its name suggests, identifies the type of the table (ROVs are identified with a table type of 2), and Read_Only_View will contain the view definition. Let’s set the Table_Type now; we will fill in Read_Only_View once we have the view definition:

-- Oracle
update Tables set Table_Type = 2 where Table_Name = 'Activity';
commit;
-- SQLServer
update Tables set Table_Type = 2 where Table_Name = 'Activity'
go


The View Definition

Now, let’s define the view that will replace the table. The view schema has to match the schema of the table exactly, including the Rn_ fields and their lengths and types. If the schemas do not match you will get a complaint from the PAC when doing the ACC.
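
A quick way to double-check the match is to capture the table’s column list before dropping it and compare it with the view’s columns afterwards. In SQLServer (user_tab_columns would be the Oracle counterpart):

-- Run once against the table (before the drop) and once against the view;
-- the two result sets should be identical
select column_name, data_type, character_maximum_length
from information_schema.columns
where table_name = 'Activity'
order by ordinal_position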

-- Oracle
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast('000A' || substr(cast(a.Contact_Activities_Id as varchar2(16)),5,12) as raw(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a
    
    

-- SQLServer
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast(0x000A + substring(a.Contact_Activities_Id,3,6) as binary(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a

You can see both versions of the view, for Oracle and for SQLServer. There are some things to keep in mind when creating the ROV. In our example, the Rn_Descriptor field is cast to varchar(80), since the Table_Descriptor type is of length 80 (and the view schema and the table schema need to match). The other important thing is the formula used to calculate the Id of the new table. We want to use the same Id as the underlying tables, but one of them (the one for Contact_Activities) gets an offset in order to avoid duplicate Ids. Notice how the Activity_Id is calculated in the Contact_Activities branch of each version: in the end this is just a manipulation of Pivotal Ids.
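
As a worked example (SQLServer syntax), a hypothetical Contact_Activities_Id of 0x800000000000002A keeps its low 6 bytes and gets the 0x000A offset prepended:

-- 0x800000000000002A becomes 0x000A00000000002A in the ROV
select cast(0x000A + substring(0x800000000000002A, 3, 6) as binary(8)) as Activity_Id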

Make sure you update the Read_Only_View field in the Tables table for the Activity row with the text of the view as defined above.
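
In SQLServer, for instance, this is just another update against the BM (the string literal below is a placeholder; paste in the full SELECT from above):

-- Store the view definition in the new Read_Only_View field
update Tables
set Read_Only_View = 'select a.Rn_Appointments_Id Activity_Id, ... union all select ...'
where Table_Name = 'Activity'
go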

Now, do an ACC. The first time you do an ACC after you’ve configured the ROV, the PAC might propose to add the missing table and also to create the view. Don’t accept the changes yet: the PAC proposes to create the table based on the information contained in the shadow tables in the ED. Do an ACC one more time and you should now see that only the creation of the view is proposed. Accept the changes. Now you have your ROV working and integrated in your Pivotal 5.9 system, and you can start creating forms and queries, defining security (it must be read only), etc.

There are some things that need to be taken care of in order to have a complete solution.


Deployments

Not everything could be that good. When I did my first deployment, a replica of the BM from the development environment to the integration environment, I was surprised to find that the changes made to the Tables table didn’t travel with the replica. If this didn’t work I had a big problem, since I didn’t want to repeat the whole procedure every time I did a deployment. After some digging I found the solution: the PAC creates the structure of the BM based on the information contained in the RSys_Tables and RSys_Table_Fields tables in the BM. These are what in Pivotal jargon are known as “shadow tables”. We need to update the shadow tables to reflect our newly created fields.

In the BM run the following:

-- Oracle
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Table_Type',
  10,
  null
);
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 2) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  1,
  null
);
commit;

-- SQLServer
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 1 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Table_Type',
    10,
    null
)
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 2 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Read_Only_View',
    1,
    null
)
go

The above script just inserts the new field definitions into RSys_Table_Fields. We take the maximum Table_Fields_Id and add 1 and 2 respectively to create the Ids for the new fields.
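
If you want to verify the result, the two new rows should now show up in the shadow table (this check works on both SQLServer and Oracle):

-- Both field definitions should be listed for the Tables table
select Field_Name, Field_Type, Field_Length
from RSys_Table_Fields
where Tables_Id = (select Tables_Id from RSys_Tables where Table_Name = 'Tables')
  and Field_Name in ('Table_Type', 'Read_Only_View');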

After this, when you do a backup of the BM, replicate the BM from one system to another, or do an UFO, the view definitions will be maintained.


Metadata in the Toolkit

You should also update the Toolkit metadata so everything stays consistent. The new BM fields, Table_Type and Read_Only_View, need to be added to the CM database. In the CM run the following:

-- Oracle
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
);

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
);

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

commit;


-- SQLServer
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
)

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
)

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

go



Performance

Unfortunately, we cannot forget about performance. This part of the article applies to ROVs in general, including ROVs in Pivotal 6.x.

Both SQLServer and Oracle are smart enough to use the indexes of the underlying tables when querying the view, so any query you run against the ROV will need supporting indexes on the underlying tables. In our example this is easy, since we only have two underlying tables and we can define indexes using the Toolkit. It is not that easy when you work with tables that are not on the same database server, accessed through linked servers in SQLServer or database links in Oracle. You need to be careful and test that the queries are executed at the source; otherwise you will have a performance problem.

There is one thing in particular that is not obvious: the primary key (or what would be the primary key if it were a normal table) of the ROV. In our example the primary key is the Activity_Id field. This is a calculated field, and it is not indexed. So if Pivotal tries to open a form for a particular record of the ROV, the platform will query the view by its primary key, which is not indexed, causing a performance problem.

To avoid this we need to define an index on the calculated field. In our example, the Id coming from the Rn_Appointments branch is the primary key of that table, with no formula, so there is nothing to worry about there. This is not the case for Contact_Activities, where the Id has a formula.

Oracle supports the creation of function-based indexes, so the new index is as follows:

-- Oracle
create unique index Contact_Activities_ROV1 on Contact_Activities (cast('000A'||substr(cast(Contact_Activities_Id as varchar2(16)),5,12) AS raw(8)));

SQLServer doesn’t support function-based indexes, but you can create an index on the view. The problem is that you can’t index a view that has a text or ntext field, and in our case we have one of those. So we need to resort to another technique: let’s create a computed column in the Contact_Activities table and then define an index on it:

-- SQLServer
alter table Contact_Activities add Contact_Activities_Id_ROV as cast(0x000A + substring(Contact_Activities_id,3,6) as binary(8))

create unique index Contact_Activities_ROV1 on Contact_Activities(Contact_Activities_Id_ROV)

go


Formula field

You need to create this field in the Toolkit as well, and then do an ACC.

The problem with this approach is that you need to do this manually in every environment, since Pivotal doesn’t allow you to enter the above formula as part of the field definition. You only need to do it manually once per environment, though; the formula should be respected by the PAC on successive deployments.

However, this is something I have not tested yet, so please be careful with this solution.





Considerations

The above was just an example of the many uses of ROVs. The objective of this article was to explain how to use ROVs in Pivotal 5.9 and the steps required to make ROVs part of the platform.

Although only briefly explained here, the use of ROVs doesn’t come for free; there are many aspects that you should consider before implementing ROVs in any version of Pivotal. Performance is usually the part that causes most of the trouble, and it should be taken seriously when the underlying data comes from big tables. Not considering performance might affect your whole system. There are a lot of good practices on the use of ROVs and I will try to describe some of them in future articles.


Pivotal stores queries using a binary format. Nobody knows (at least I don’t know anybody who does) what this binary structure is, and Pivotal doesn’t provide documentation nor any tool to extract the text from these queries. If you’ve used Pivotal for long enough you have probably wished to be able to get the text of those queries. A good example of when this could be helpful is when you make a modification to a field and would like to see what parts of the system could be affected by such a change. You can query the metadata in the BM and see where the field is being used, but when it comes to queries there is no way to know, unless of course you go through all the queries in the system and visually check whether the field is being used (ehmm... I don’t want to do that).

I was recently working on a localization project for a Pivotal system. As part of the localization we decided to take some of the fields that were initially created as Combos and change them to Choices. (Note: Combos are saved as strings in the DB, while Choices are saved as integers and translated to strings at run time, which makes Choices the best data type for multi-language systems.) Making this change is not easy in Pivotal: the Toolkit doesn’t allow it, the database doesn’t allow it either, and you also need to change the data in the ED (I will explain how to do this automatically in a future post). Anyway, after the change to Choice was made I had to make sure that all other objects in the BM were “aware” of the change. In the case of queries, you need to change the query (remove the field and then add it again) so it has the right metadata to treat the field as a string (Combo) or an integer (Choice). I asked some of my coworkers how they had been doing this in the past and the answer was... ehmmm... “go through all the queries one by one and see if the field is being used”. There’s no way I’m doing that! So I decided it was time to dig into the binary format of a query.

I will first talk about the structure of the binary format of a query and then provide solutions for both SQLServer and Oracle. The concepts and solutions presented here work for any version of Pivotal 5.x and 6.x.

Binary Structure of a Pivotal Query

Pivotal stores queries in a table called Saved_Lookups, which has a structure such as the following:

Saved_Lookups table structure

The actual query is saved in a field called SQL_Tree_Binary. If you query this field you will get something similar to this:

Query results for Saved_Lookups

As expected, SQL_Tree_Binary contains binary (hexadecimal) values.

First, let’s talk about the record Id. The Saved_Lookups table is present in all Pivotal databases: ED, BM and CM. We will focus here on the queries created with the Toolkit for the system; we don’t want to consider the queries created in the Toolkit for the metadata (which are stored in the same table in the BM). How do we differentiate them? Using the Id (the Saved_Lookups_Id field) of the record:

  • Records with an Id that starts with 0x80 are system queries
  • Records with an Id that starts with 0x00 are metadata queries

You can do this query to identify them:

-- SQLServer
select 
  saved_lookup_name,
  case substring(saved_lookups_id,1,1)
   when 0x00 then 'metadata'
   when 0x80 then 'system'
  end type
from saved_lookups

-- Oracle
select
  saved_lookup_name,
  case utl_raw.substr(saved_lookups_id,1,1)
    when hextoraw('00') then 'metadata'
    when hextoraw('80') then 'system'
  end type
from
  saved_lookups

Now, let’s talk about the binary format. There are two kinds of format: Unicode and non-Unicode. The binary format started as non-Unicode (single-byte) in the early days of Pivotal, but at some point (I don’t know exactly when) Pivotal realized that queries needed to support international characters as well and switched to a Unicode (double-byte) format. Your system could have both! Queries that haven’t been touched/edited since they were first created may still be in the non-Unicode format. In the latest versions of Pivotal 5.x and 6.x all new queries are created in Unicode format, and non-Unicode queries are converted to Unicode when modified.

The header

The type of format (Unicode or non-Unicode) can be determined by reading the first 5 bytes:

  • 0x0100000000 : non-Unicode
  • 0x01000000FF : Unicode

Knowing whether it is Unicode or not is important since we will need to read the binary accordingly. The binary format has a kind of “header”, which depends on the type:

  • Header for non-Unicode queries: 0x0100000000 (length: 5)
  • Header for Unicode queries: 0x01000000FFFEFF00FFFEFF (length: 11)

As you can see, the header size varies depending on whether the query is Unicode or not.
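
As a quick sanity check, you can tally how many queries of each flavor your BM holds; here is a sketch in SQLServer based on the header bytes above:

select fmt, count(*) qty
from (
    select case cast(substring(sql_tree_binary, 1, 5) as binary(5))
             when 0x0100000000 then 'non-Unicode'
             when 0x01000000FF then 'Unicode'
           end fmt
    from saved_lookups
) x
group by fmt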

The length

After the header comes the number of bytes we need to read to get the text of the query. Again, there are two flavors here: when the length fits in a single byte (i.e. the length is less than 255), it is stored in that byte. If the length does not fit in a single byte, we will find 0xFF followed by two bytes holding the length. This is:

  • 0xXX: when the length is less than 255
  • 0xFFYYZZ: when the length is greater than or equal to 255

Here XX or YYZZ is the binary representation of an integer value for the length. But there is another caveat (why make it easy?): when the length is represented with 2 bytes, those bytes are stored little-endian, which means that to get the integer value you need to invert YYZZ to ZZYY.
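
A one-liner makes the inversion visible (SQLServer syntax); taking the bytes 0x7B04 as they appear would give a nonsense length, while the inverted bytes give the real one:

select cast(0x7B04 as int) as not_inverted,  -- 31492: wrong
       cast(0x047B as int) as inverted       -- 1147: the actual length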

So, after reading the header (which is 5 or 11 bytes depending on whether the query is Unicode or not) we need to read 1 or 3 more bytes, depending on whether the first of them is 0xFF, to get the length. If we read 1 byte we convert it to an integer. If we read 3 bytes we take the second and third bytes inverted and convert them to an integer. So we end up with this decision tree:

  • If it is Unicode, then skip 11 bytes for the header. For the length:
    • If the 12th byte is 0xFF then read the 14th and 13th bytes (inverted) and convert them to an integer
    • If the 12th byte is not 0xFF then convert the 12th byte to an integer
  • If it is non-Unicode, then skip 5 bytes for the header. For the length:
    • If the 6th byte is 0xFF then read the 8th and 7th bytes (inverted) and convert them to an integer
    • If the 6th byte is not 0xFF then convert the 6th byte to an integer

The start index

Lastly, we need to know where to start reading our text. As you might have guessed, it depends on the same conditions, so we have the following:

  • If it is Unicode, then skip 11 bytes for the header. For the start index:
    • If the 12th byte is 0xFF we skip 3 more bytes, so our text starts at byte 15
    • If the 12th byte is not 0xFF we skip 1 more byte, so our text starts at byte 13
  • If it is non-Unicode, then skip 5 bytes for the header. For the start index:
    • If the 6th byte is 0xFF we skip 3 more bytes, so our text starts at byte 9
    • If the 6th byte is not 0xFF we skip 1 more byte, so our text starts at byte 7


Some examples:

Example 1: SQL_Tree_Binary = 0x01000000FFFEFF00FFFEFFFF7B042800….

The first 5 bytes are 0x01000000FF, so it is Unicode. We skip 11 bytes to get to the length. The 12th byte is 0xFF, meaning that we need the 13th and 14th bytes to get the length. These bytes are 0x7B04; to convert them to an integer we invert them to 0x047B, so the length is 1147. The query starts at byte 15.


Example 2: SQL_Tree_Binary = 0x01000000FFFEFF00FFFEFF352800….

The first 5 bytes are 0x01000000FF, so it is Unicode. We skip 11 bytes to get to the length. The 12th byte is not 0xFF, meaning that the 12th byte itself holds the length. We convert the byte 0x35 to an integer and get a length of 53. The query starts at byte 13.


Example 3: SQL_Tree_Binary = 0x0100000000FF5F052820….

The first 5 bytes are 0x0100000000, so it is non-Unicode. We skip 5 bytes to get to the length. The 6th byte is 0xFF, meaning that we need the 7th and 8th bytes to get the length. These bytes are 0x5F05; to convert them to an integer we invert them to 0x055F, so the length is 1375. The query starts at byte 9.


Example 4: SQL_Tree_Binary = 0x0100000000F12820….

The first 5 bytes are 0x0100000000, so it is non-Unicode. We skip 5 bytes to get to the length. The 6th byte is not 0xFF, meaning that the 6th byte itself holds the length. We convert the byte 0xF1 to an integer and get a length of 241. The query starts at byte 7.


This is not the whole story on the binary format of the query, but for now it is enough to get us what we want.


Getting the Query Text

Ok, now that we understand the structure, it is very simple to get the text (or is it?). From our analysis above we get two important pieces of information:

  • startIndex: where we can find the text of the query in all the binary
  • length: the length of bytes we need to consider to get the text

All that is left is to take a substring of the binary value inside the SQL_Tree_Binary field, starting at startIndex and taking length bytes. In the case of a Unicode query we need to take length * 2 bytes (because Unicode is double-byte, remember?).

Both SQLServer and Oracle allow you to convert a binary/raw (respectively) datatype to varchar or nvarchar. I will explain how to do this later, but first let me point out something that gives us a cleaner text:

The binary format used to represent the query is more complex than what’s explained here. The text of the query has two parts:

  • The first part is the text of the query as shown in the Toolkit in the query editor when we click the “Show SQL” button
  • The second part is a more structured form of the query, with comments for each of the clauses in the query editor.

As an example, take this query:

Sample query

Here is the extracted text:

( 
    (
        EXISTS 
        (
            SELECT * 
            FROM   Company__Type 
            WHERE 
                Company__Type.Company_Type  = 5 AND 
                (Company__Type.Company_Id = Company.Company_Id)
        )
    ) AND 
    (Company.Inactive  = 0)
) 

WhereDelim,..,WhereDelim 

( 
    (
        /* Company  Type(Company).Company Type = Partner */  
        EXISTS 
        (
            SELECT * 
            FROM 
            (
                SELECT DISTINCT Company__Type.Company_Id   
                FROM   Company__Type   
                WHERE Company__Type.Company_Type  = 5
            ) SubqueryView   
            WHERE SubqueryView.Company_Id=Company.Company_Id
        )
    )   AND 
    (Company.Inactive  = 0)
)

(indentation and new lines added by me). Notice how the two parts are clearly separated by a special delimiter containing the text WhereDelim,..,WhereDelim. The second part is closer to what Pivotal actually sends to the database, and it also carries comments that correspond to the lines in the query designer, so it is easier to identify. We will take only the second part as the text of our query.


Extracting the Query Text in SQLServer

So, in SQLServer we can create the following view:

create view saved_lookups_text as
select
    table_id,
    table_name,
    saved_lookups_id,
    saved_lookup_name,    
    case
      when charindex('WhereDelim',query_text) > 0 then 
        substring(query_text,len(query_text) - charindex(reverse('WhereDelim'),reverse(query_text)) + 3,4000)
      else query_text
    end query_text
from (
    select
      table_id,
      table_name,
      saved_lookups_id,
      saved_lookup_name,
      case isUnicode
        when 1 then cast(substring(sql_tree_binary, startIndex, queryLength * 2) as nvarchar(4000))
        else cast(substring(sql_tree_binary, startIndex, queryLength) as varchar(8000))
      end query_text
    from (
        select
            saved_lookup_name, 
            case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 0
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 1
            end isUnicode,
            cast(case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 
                    case 
                        when substring(sql_tree_binary,6,1) = 0xFF then substring(sql_tree_binary,8,1) + substring(sql_tree_binary,7,1)
                        else substring(sql_tree_binary,6,1)
                    end
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 
                    case 
                        when substring(sql_tree_binary,12,1) = 0xFF then substring(sql_tree_binary,14,1) + substring(sql_tree_binary,13,1)
                        else substring(sql_tree_binary,12,1)
                    end
            end as int) queryLength,
            case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 
                    case 
                        when substring(sql_tree_binary,6,1) = 0xFF then 9
                        else 7
                    end
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 
                    case 
                        when substring(sql_tree_binary,12,1) = 0xFF then 15
                        else 13
                    end
            end    startIndex,    
            s.table_id,
            t.table_name,
            s.saved_lookups_id,   
            s.sql_tree_binary
        from 
          saved_lookups s inner join 
          tables t on s.table_id = t.tables_id
        where substring(saved_lookups_id,1,1) <> 0x00
    ) x
) y

(I know the query could be written better, but for the sake of demonstration it is ok)

This query will present something like this (notice the query_text field):

Results from view with query text


Notice how, due to the lack of a reverse charindex function in SQLServer, we resort to a trick with reverse() to find the last occurrence of the WhereDelim text and then take the text that comes after it. Notice also how, in the inner query, we cast the text to the appropriate character set based on whether the query is Unicode or not.
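
With this view in place, the original problem (which queries use a given field?) becomes a simple search; Company.Inactive below is just an example field name:

select saved_lookup_name, table_name
from saved_lookups_text
where query_text like '%Company.Inactive%'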

You can download this SQL script from here.


Extracting the Query Text in Oracle

You guessed it! In Oracle things are not as easy. There are some caveats we need to consider:

  • Oracle has a limit of 2000 characters for the nvarchar2 data type and 4000 characters for varchar2. So if our text is longer than that we will get an error when converting it using the UTL_RAW package.
  • The package used to read BLOBs in Oracle is DBMS_LOB. This package has a convenient SUBSTR function to get pieces of the BLOB (in our case the data in the SQL_Tree_Binary field). This function changes the endianness of the data when the data is Unicode (in other words, it inverts it), so when converting the Unicode text using UTL_RAW the result gets garbled and we get funky letters instead.
  • You cannot combine a Unicode charset with a non-Unicode charset in the same field using CASE statements

The view in Oracle is like this:

create view saved_lookups_text as
select
  saved_lookups_id,
  saved_lookup_name,
  case when unicode = 1
    then reverse(cast(utl_raw.cast_to_nvarchar2(utl_raw.reverse(dbms_lob.substr(sql_tree_binary,case when length > 1000 then 2000 else length * 2 end,queryStartIndex))) as varchar2(4000)))
    else utl_raw.cast_to_varchar2(dbms_lob.substr(sql_tree_binary,case when length > 2000 then 2000 else length end,queryStartIndex))
  end queryText,
  table_id,
  table_name
from
(
  select
    saved_lookup_name,
    dbms_lob.substr(sql_tree_binary,5,1) queryType,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then dbms_lob.substr(sql_tree_binary,5,1)
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then dbms_lob.substr(sql_tree_binary,11,1)
    end prefix,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then 0
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then 1
    end unicode,
    utl_raw.cast_to_binary_integer(case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then
           case
             when dbms_lob.substr(sql_tree_binary,1,6) = hextoraw('FF') then utl_raw.concat(dbms_lob.substr(sql_tree_binary,1,8),dbms_lob.substr(sql_tree_binary,1,7))
             else dbms_lob.substr(sql_tree_binary,1,6)
           end
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then
           case
             when dbms_lob.substr(sql_tree_binary,1,12) = hextoraw('FF') then utl_raw.concat(dbms_lob.substr(sql_tree_binary,1,14),dbms_lob.substr(sql_tree_binary,1,13))
             else dbms_lob.substr(sql_tree_binary,1,12)
           end
    end) length,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then
           case
             when dbms_lob.substr(sql_tree_binary,1,6) = hextoraw('FF') then 9
             else 7
           end
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then
           case
             when dbms_lob.substr(sql_tree_binary,1,12) = hextoraw('FF') then 15
             else 13
           end
    end queryStartIndex,
    s.table_id,
    t.table_name,
    s.saved_lookups_id,
    s.sql_tree_binary
  from saved_lookups s inner join tables t on s.table_id = t.tables_id
  where
    utl_raw.substr(saved_lookups_id,1,1) = hextoraw('80') -- only business queries
  order by table_id,s.saved_lookup_name
)

Notice the tricks in the queryText expression (the reverse/utl_raw dance) to overcome all the problems mentioned above. This query shows both parts of the query text (it doesn’t use the WhereDelim trick mentioned above, but that is easy to add). Also, this view doesn’t handle queries whose text is longer than 4000 characters (non-Unicode) or 2000 characters (Unicode), which unfortunately leaves a lot of queries in the dark.

In Oracle I have instead created a package that gives me what I want in the query output window. With this package you can do things such as:

-- use a procedure to get the text of the query
declare
    queryText varchar2(8000);
begin
  pk_pivotal_toolkit.getQueryText('<some query name>',queryText);
  pk_pivotal_toolkit.printText(queryText);
end;

-- use a function to get the text of the query
declare
  queryText varchar2(8000);
begin
  queryText := pk_pivotal_toolkit.getQueryText('<some query name>');
  pk_pivotal_toolkit.printText(queryText);
end;

-- this will show only the first 4000 chars of a query
select s.*,pk_pivotal_toolkit.getQueryText(saved_lookup_name) query_text from saved_lookups s
where utl_raw.substr(saved_lookups_id,1,1) = hextoraw('80')


-- see what queries use a specific field:
call pk_pivotal_toolkit.findFieldsInQuery('Company.Type, Contact.First_Name');

You can download the whole package from here.


Conclusion

Now you can know which queries make use of a certain field in the database!

I hope you find this useful. If you do, please let me know. Also, if you know a better way of doing this, let me know. And if somebody from Pivotal reads this: please give us a better way of getting the text of a query.


In a previous post (see How does Pivotal sends emails?) I explained the two methods Pivotal has to send an email: via SMTP or using the email client, depending on whether the email is being sent client-side or server-side and the SMTP options configured for the system.

In this article I will provide more information on how to configure Pivotal to always send emails using SMTP, both on the client and on the server side. It turns out that when you use the Pivotal Configuration Utility tool to set the server to use SMTP, the following information gets written to the registry:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Pivotal\Email]
"Protocol"="SMTP"
"Server"="smtp.company.com"
"From"="someemail@company.com"
"Reply To"="someemail@company.com"
"Username"=""
"Password"=""

(the key above is for an x64 OS. If configuring an x86 OS then the key is [HKEY_LOCAL_MACHINE\SOFTWARE\Pivotal\Email])

You can just create a .reg file with the text above and write this info on any machine where Pivotal runs, and emails will be sent using SMTP. You can also create the above registry entries on a client machine and have the client send SMTP emails as well!

Ah, and this also works for Pivotal 5.9 server side!

So, stop using the Outlook profile to send emails and use SMTP instead.


Pivotal CRM has applications for the iPad and the iPhone. Both applications talk to a middle tier called the Pivotal Device Server, which is responsible for accessing the information on the Pivotal Business Server. The Device Server exposes REST services which are consumed by these applications.

At the moment, the Device Server is only used by these two iOS applications. However, having a REST endpoint to our Pivotal server is something that could be leveraged in other applications. For example, I could have an HTML5 web site that reads/writes to our Pivotal backend using the REST interface. I could also write any mobile app I like (e.g. for Windows Phone or Android) to access the information. In other words, I could use the REST endpoint for anything I like.

The problem is that there is no documentation (or at least I haven’t found any) on how to use the Device Server. How can I use the REST endpoint if I don’t know the services exposed, the lifecycle, and the parameters required? Well, after some digging I’ve found enough information to let us use the Device Server as a middle tier for any application we want. As an example, I provide a simple console application that gets a list of contacts and then gets the data for one particular contact (you can extrapolate this to more useful and fun applications, such as an MVC HTML5 app or a cross-platform mobile app for any of the phones out there).

Note: I don’t know how the licensing works on this. You should check with your Pivotal provider to see how you can license this service to be used as a middle tier for a non-Pivotal application. This is just a technical article on how to do things, and as with everything, this doesn’t mean you can do them.

After the little disclaimer, let’s get to work.

Discovering the Service Metadata

Services created with Microsoft WCF can display a help page showing a little documentation about the services exposed. This can be disabled but, fortunately for us, the developers didn’t disable it. You can get the metadata for a web service by appending /help to its endpoint.

All the Device Server service metadata should be exposed by a call to http://<server>/PivotalDeviceServer/App/HelpAll (I only knew this after doing the steps below), but this call throws an error.

So, how can I find out what services are being used if there is no documentation? Our first step is to find out what services are exposed. Since all the communication between the devices and the Device Server is over HTTP, the easiest thing to do is to put something in the middle that lets us see what information is being exchanged. I used Fiddler to do this: I configured Fiddler as a reverse proxy and then configured the URL in the Pivotal CRM iOS application (either iPad or iPhone) to point to Fiddler instead of the actual server.

Pivotal CRM for iPad/iPhone login screen

In the image we can see the Pivotal CRM iPhone/iPad login screen. We can specify our own URL in the “Server URL” field. In this case we are pointing to the server where the Device Server is, but using the default Fiddler port 8888. Fiddler is installed on the same machine as the Device Server, so it will forward the request to the Device Server while showing us, in the middle, what the request looks like. This will allow us to see what URLs are being called by the application, thus exposing the service endpoints.


After playing around with the app I found that the Device Server exposes four services:

  • App
  • User
  • Data
  • Metadata

If we navigate to the service metadata page (by appending /help to the end of the URL for the service) for each one of these services, we get a list of all the methods exposed by the service. The image shows the metadata for the Data service:

Data service metadata

After doing this for each service, I got the following list:

/Data/Attachment/ByDataFormId/{dataformid}/{recordid}/{fieldid}
/Data/Attachment/ByDataFormName/{dataformname}/{recordid}/{fieldid}
/Data/ChoiceLists/ById/{tableid}?choicetype={choicetype}
/Data/ChoiceLists/ByName/{tablename}?choicetype={choicetype}
/Data/Command/CommandType/{commandtype}/CommandId/{commandid}/?returnSchema={returnSchema}
/Data/DataFormId/ByTableId/{tableid}/ByRecordId/{recordid}
/Data/Image/ById/{imageId}
/Data/Image/ByName/{imageName}
/Data/Images
/Data/Graph/ByGraphId/{graphId}/BySearchId/{searchId}
/Data/Graph/ByGraphName/{graphName}/BySearchId/{searchId}
/Data/ListDataForField/ByDataFormId/{dataformid}/{recordid}/{listformfieldid}
/Data/RecordAdd/ByDataFormId/{dataformid}?returnSchema={returnSchema}
/Data/RecordAdd/ByDataFormName/{dataformname}?returnSchema={returnSchema}
/Data/Record/ByDataFormId/{dataformid}/{recordid}?returnSchema={returnSchema}
/Data/Record/ByDataFormName/{dataformname}/{recordid}?returnSchema={returnSchema}
/Data/RecordCancel/ByDataFormId/{dataformid}/{recordid}
/Data/RecordDelete/ByDataFormId/{dataformid}/{recordid}
/Data/RecordDelete/ByDataFormName/{dataformname}/{recordid}
/Data/RecordLink/ByDataFormId/{parentdataformid}/{parentrecordid}/{linktableid}/{linkrecordid}?returnSchema={returnSchema}
/Data/RecordNew/ByDataFormId/{dataformid}?returnSchema={returnSchema}
/Data/RecordNew/ByDataFormName/{dataformname}?returnSchema={returnSchema}
/Data/RecordSecondary/ByDataFormId/{dataformid}/{recordid}/{secondarygroupid}?returnSchema={returnSchema}
/Data/RecordUnlink/ByDataFormId/{parentdataformid}/{parentrecordid}/{unlinktableid}/{unlinkrecordid}
/Data/RecordUpdate/ByDataFormId/{dataformid}/{recordid}?returnSchema={returnSchema}
/Data/RecordUpdate/ByDataFormName/{dataformname}/{recordid}?returnSchema={returnSchema}
/Data/Search/BySearchId/{searchId}?criteria={criteria}&returnSearchResultsListDef={returnSearchResultsListDef}&searchResultsListId={searchResultsListId}
/Data/Search/BySearchName/{searchName}?criteria={criteria}&returnSearchResultsListDef={returnSearchResultsListDef}&searchResultsListId={searchResultsListId}
/Data/Search/ByTableId/{tableId}?criteria={criteria}
/Data/Search/ByTableName/{tableName}?criteria={criteria}
/Data/SearchCount/BySearchId/{searchId}?criteria={criteria}
/Data/SearchCount/BySearchName/{searchName}?criteria={criteria}
/Data/SearchCount/ByTableId/{tableId}?criteria={criteria}
/Data/SearchCount/ByTableName/{tableName}?criteria={criteria}

/App
/App/HelpAll

/Metadata/AppLayout/BusinessObject
/Metadata/AppLayout/Dashboard
/Metadata/AppLayout/DataForm/ByTableName/{table}
/Metadata/AppLayout/All
/Metadata/AppLayout/Images
/Metadata/AppLayout/SearchList/ByTableName/{table}
/Metadata/AppLayout/Taskpads

/User/Login
/User/Logout
/User/Permissions

Ok, now I have some documentation. The next thing is to figure out how to use these methods.

Authentication

The first thing we need to do is log in to the Device Server. If we use Fiddler and log in using the iPad/iPhone application, we will notice the following calls:

Device Server requests

The first call is the most important. If we inspect the request and the response, we will see something like this:

------------------------ Request ------------------------
POST http://windows2008/PivotalDeviceServer/User/Login HTTP/1.1
Host: windows2008
X-Titanium-Id: a406e9f4-38bf-40ff-b6ed-ed1c17ee11eb
X-Requested-With: XMLHttpRequest
Accept-Encoding: gzip
Content-Type: application/json
Content-Length: 737
Connection: close
Cookie: ASP.NET_SessionId=phfvgrdugwvhyiu1gah54kzy
User-Agent: Appcelerator Titanium/3.1.3.GA (iPad/7.1.1; iPhone OS; en_ES;)

{
    "AppInstallationId": "32C74B8E-C640-43D3-8556-64D29065C872",
    "Domain": "",
    "Id": "PCS",
    "Password": "%27%3C%C3%8D%03yOy%C3%96%C2%96",
    "AppVersion": "6.0.5.0.4.2",
    "SecurityGroups": {
        "SecurityGroups": [{
            "Id": "0x8000000000000002",
            "Name": "ContactManagementAdministrator"
        },
        {
            "Id": "0x8000000000000003",
            "Name": "ContactManagementSuperUser"
        },
        {
            "Id": "0x8000000000000004",
            "Name": "ContactManagementUser"
        }]
    },
    "AuthenticatedToken": null,
    "ForceLogon": false
}


------------------------ Response ------------------------
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Thu, 08 May 2014 11:13:04 GMT
Connection: close
Content-Length: 1659

{
    "AppSchemaVersion": "1.0.3",
    "StatusCode": "Success",
    "CRMSystemName": "CRM",
    "DaysToKeepInHistory": 20,
    "DeviceServerVersion": "6.0.4.16",
    "DisableAutoLoginAfterIdleTimeout": "false",
    "DisableAutoLoginOnAppStart": "false",
    "DisableCachedMeta": "true",
    "DisableOffline": "false",
    "DoNotRememberUsernameOnAppStart": "false",
    "DoNotRunQuickSearchImmediately": "false",
    "ImagesChecksum": "817575750",
    "MetaDataSchemaVersion": "06.13",
    "MetaDataVersion": "826",
    "Permissions": {
        "SecurityGroups": [{
            "Id": "0x8000000000000002",
            "Name": "Contact Management Administrator"
        },
        {
            "Id": "0x8000000000000003",
            "Name": "Contact Management Super User"
        },
        {
            "Id": "0x8000000000000004",
            "Name": "Contact Management User"
        }],
        "TablePermissions": [{
            "CanDelete": true,
            "CanInsert": true,
            "CanQuickSearch": true,
            "CanRead": true,
            "CanUpdate": true,
            "Table": "Contact",
            "TableId": "0x800000000000003D"
        }]
    },
    "SecurityGroupsModified": "false",
    "StatusText": null,
    "SystemGlobalId": "17f65970-9322-47e7-b112-e424af3a430e",
    "SystemServerVersion": "6.0.13.16",
    "SystemType": "Pivotal",
    "UserToken": "4c8925a3753392da383f229f4ef4706ecb0ad9d3f8fcab016301e7f1fb2e7b2e8b8c171c4001dd6bcbba0057bd63b131ad744a286224d742b42788537063ea715e68407fc3062a165436bd6b95a36143818b9ed562b8833351b95a2f9aa68b6a5d64a74d4b9634c3ee00c1e374a0520ed35a999fcdc3df594d88817d495e5102"
}

The important pieces are the Id and Password fields in the request, and the UserToken field in the response. In the request we see that we need to specify a username and an (encrypted) password to authenticate with the Device Server. In the response we see that the Device Server returns an authentication token; this token will be used in all subsequent requests to the Device Server.

The tricky part here is how to encrypt the password. The encryption used is a variant of the ARCFOUR (RC4) algorithm. Here is a utility to encrypt the password:

using System;
using System.Text;
using System.Web;

namespace Pivotal
{
    public static class Util
    {
        public static string EncryptString(string valueToEncrypt)
        {
            if (string.IsNullOrWhiteSpace(valueToEncrypt))
            {
                return valueToEncrypt;
            }
            return Uri.EscapeDataString(RC4Encrypt("rzPeOQapG67sJfUqATJOfzAl74JXzyFdo0xTNYwVh", valueToEncrypt));
        }
        
        private static string RC4Encrypt(string key, string pt)
        {
            int num;
            int[] numArray = new int[256];
            for (int i = 0; i < 256; i++)
            {
                numArray[i] = i;
            }
            int num1 = 0;
            for (int j = 0; j < 256; j++)
            {
                int num2 = key[j % key.Length];
                num1 = (num1 + numArray[j] + num2) % 256;
                num = numArray[j];
                numArray[j] = numArray[num1];
                numArray[num1] = num;
            }
            int num3 = 0;
            num1 = 0;
            StringBuilder stringBuilder = new StringBuilder();
            for (int k = 0; k < pt.Length; k++)
            {
                num3 = (num3 + 1) % 256;
                num1 = (num1 + numArray[num3]) % 256;
                num = numArray[num3];
                numArray[num3] = numArray[num1];
                numArray[num1] = num;
                int num4 = pt[k];
                stringBuilder.Append(Convert.ToChar(num4 ^ numArray[(numArray[num3] + numArray[num1]) % 256]));
            }
            return stringBuilder.ToString();
        }
    }
}

With this we can authenticate with the server. Let’s create a simple console application to test the authentication. Since we are working with REST and JSON, we will use the RestSharp and Json.Net libraries from NuGet.

static void Main(string[] args)
{
    string user = args[0];
    string password = args[1];
    
    password = Util.EncryptString(password);

    var client = new RestClient("http://windows2008/PivotalDeviceServer");
    
    // get the authentication token
    var request = new RestRequest("User/Login", Method.POST) { RequestFormat = DataFormat.Json };
    request.AddHeader("Accept", "application/json");
    request.AddBody(new
    {
        Id = user,
        Password = password
    });

    var response = client.Execute(request);
    JObject resultResponse = JObject.Parse(Util.DecodeJson(response.Content));
    string authenticationToken = (string)resultResponse["UserToken"];
}

We are reading the username and password from the command line (as mentioned above, you can extrapolate this to more fun applications; this is just a simple example to explain things). We encrypt the password using the utility presented above, and then build a JSON body for the User/Login call using RestSharp. Notice that we only care about the username and password; any other parameters we can skip. We then use Json.Net to parse the response and get the authentication token, identified by the UserToken parameter. Notice also that we use a utility to decode the JSON response returned by the Device Server.

I am not sure why (maybe it is an iOS thing), but the JSON response has a lot of escape characters which are not standard JSON format. In order to get valid JSON from the response, we need to trim all these escape characters. For this we use another utility method:

public static string DecodeJson(string json)
{
    json = json.Replace("\\\\\\\"","'").Replace("\\\"", "\"").Replace("\\\\/","/");
    return json.Substring(1,json.Length - 2);
}

We will use this utility each time we get a response back from the Device Server to clean up the returned JSON. If we don’t do this, we will not be able to parse the JSON using Json.Net.

Now that we have an authentication token, the next step is to start using the API to work with the Device Server and the Pivotal backend. For our example we will get a list of contacts and then get the information for one of them. To get a list of records for a specific table we can use the method Data/Search/ByTableName/{tableName}. You can use Fiddler to see how to call this method. To save space, I won’t put the request and response output from Fiddler here, but instead I will show how to call this method from our sample program:

// get the list of contacts
request = new RestRequest("Data/Search/ByTableName/{tableName}", Method.POST) { RequestFormat = DataFormat.Json };
request.AddHeader("Accept", "application/json");
request.AddUrlSegment("tableName", "Contact");
request.AddBody(new
{
    AuthenticatedToken = authenticationToken
});
response = client.Execute(request);
resultResponse = JObject.Parse(Util.DecodeJson(response.Content));
JArray results = (JArray)resultResponse["SearchResult"];
List<Contact> records = results.Select(r => 
    {
        JArray fields = (JArray)r;
        int idFieldPosition = fields.Count - 1;
        return new Contact() 
        { 
            RnDescriptor = (string)fields[idFieldPosition]["Descriptor"],
            Id = (string)fields[idFieldPosition]["Value"]
        };
    }).ToList();

Notice how we specify the table name to be Contact through the tableName URL segment, and how we take the authentication token we got from the first call and add it to the body of our request. I’m using a POCO object called Contact to store the information. This class is as follows:

public class PivotalRecord
{
    public string Id { get; set; }
    public string RnDescriptor { get; set; }
    public DateTime RnEditDate { get; set; }
    public DateTime RnCreateDate { get; set; }
    public string RnEditUser { get; set; }
    public string RnCreateUser { get; set; }

    public override string ToString()
    {
        return String.Format("{0} [{1}]",RnDescriptor, Id);
    }
}

public class Contact : PivotalRecord
{
    public string FirstName { get; set; }
    public string LastName { get; set; }        
}

We now have a list of all the contact Ids. Next, let’s see how we can get the data for a specific record. For this we will use the method Data/Record/ByDataFormName/{formName}/{recordId}:

Contact record = records[0];
request = new RestRequest("Data/Record/ByDataFormName/{formName}/{recordId}?returnSchema=false", Method.POST) { RequestFormat = DataFormat.Json };
request.AddHeader("Accept", "application/json");
request.AddUrlSegment("formName", "Contact - Device");
request.AddUrlSegment("recordId", record.Id);
request.AddBody(new
{
    AuthenticatedToken = authenticationToken
});
response = client.Execute(request);                
resultResponse = JObject.Parse(Util.DecodeJson(response.Content));

record.RnDescriptor = (string)resultResponse["RecordDescriptor"];
record.RnEditDate = DateTime.Parse((string)resultResponse["RecordEditDate"]);
record.RnCreateDate = DateTime.Parse((string)resultResponse["RecordCreateDate"]);
record.RnCreateUser = (string)resultResponse["RecordCreateUserId"];
record.RnEditUser = (string)resultResponse["RecordEditUserId"];

JArray fieldList = (JArray)resultResponse["FormDataTableList"][0]["FormDataRowList"][0]["FormDataFieldList"];
record.FirstName = (string)fieldList[0]["Value"];
record.LastName = (string)fieldList[1]["Value"];

Notice again how the authentication token is specified as part of the request body. The rest is just parsing the returned JSON to extract the information we need about the contact. Unfortunately, using the deserializer from Json.Net is not straightforward here due to the way the JSON is constructed by the Device Server (it is a generic format used to represent any object).
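
If you need to extract many fields this way, a small helper can at least hide the JSON path boilerplate. Here is a minimal sketch (GetFieldValues is my own hypothetical helper, not part of the Device Server API); the positions in the returned list match the order of the fields on the data form:

// Hypothetical helper: flatten the generic form JSON into an ordered list
// of field values; positions match the field order on the data form.
static List<string> GetFieldValues(JObject recordResponse)
{
    JArray fieldList = (JArray)recordResponse["FormDataTableList"][0]["FormDataRowList"][0]["FormDataFieldList"];
    return fieldList.Select(f => (string)f["Value"]).ToList();
}

With it, the last two assignments above become:

var values = GetFieldValues(resultResponse);
record.FirstName = values[0];
record.LastName = values[1];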

And this is it. You can now use the Device Server as a REST endpoint to your Pivotal backend from any of your applications.


Every module in Pivotal is also known by its acronym. The most common modules are the base modules: Pivotal Customer Management (PCM) and the Customer Management System (CMS). These are probably the best-known modules, but there are many others. There is no documentation or list that identifies all the acronyms, so I decided to summarize them here.

For developers it is very important to understand these acronyms, and also to understand the effect that installing a module has on a development machine. For example, a module installation creates an environment variable that points to the path where all the assemblies for that module are located, which is relevant if we want to use Visual Studio to extend the module. I will explain this later in the article, but first let me describe all the modules available for a Pivotal system.

Pivotal modules

There are two possible base modules, PCM and CMS. CMS is the old (5.x) base module containing Sales, Support and Marketing all in one package. Pivotal released PCM as the new base module: a simpler and more elegant package for the Pivotal 6.x version of the product. The following tables summarize all the modules and their acronyms:

PCM

Module                          Acronym   Version
Pivotal Customer Management     PCM       6.0.8 HF2
Application Workflow            AWM       6.0.9
Contact Center                  CC        6.0.9
Customer Service and Support    CSS       6.0.8
Data Management                 DMM       6.0.9
Document Management             PDM       6.0.9
eMarketing                      EM        6.0.10
Marketing Resource Management   MRM       6.0.9
Sales Force Automation          SFA       6.0.8 HF1
Social CRM                      SCRM      6.0.10 HF2
Territory Management            TM        6.0.11
Web Portal                      WP        6.0.11 HF1


CMS

Module                          Acronym   Version
Customer Management System      CMS       6.0.5
Application Workflow            AWM       6.0
Contact Center                  CC        6.0
Data Management                 DMM       6.0.7
Document Management             PDM       6.0
eMarketing                      EM        6.0.4 HF5
Marketing Resource Management   MRM       6.0
Sales Force Automation          SFA       6.0.8 HF1
Social CRM                      SCRM      6.0


Notice that the CMS track doesn’t contain the Customer Service and Support or Territory Management modules; this is because the CMS core already includes that functionality. The only module that is not yet released for CMS is the Web Portal one (relevant if you use the Thin Client). Note that although PCM and CMS are very similar, they are not functionally equivalent: some features in PCM are not present in CMS (e.g. the integration with mapping services such as Google Maps or Bing Maps).

Environment variables

When you install a Pivotal module on a development machine, the following things occur:

  • The module is installed by default in C:\Program Files[ (x86)]\CDC Software\Pivotal CRM\Applications.
  • Each module creates an Assemblies folder where all the .NET assemblies that implement the module are placed. You will need these assemblies if you want to extend the module and add references to them in Visual Studio.
  • A system environment variable is created, named after the acronym of the module with the word Path appended. For example, installing the SFA module creates an environment variable named SFAPath, pointing to the path where the assemblies for that module are installed: C:\Program Files[ (x86)]\CDC Software\Pivotal CRM\Applications\Sales Force Automation\Assemblies. A quick way to check these variables from code is shown below.
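
Since these variables drive everything that follows, it is handy to verify which ones actually exist on a machine. A minimal sketch; the list of acronyms is just an example, so adjust it to your installation:

// Minimal sketch: print the value of each <ACRONYM>Path variable.
// The acronyms below are examples; add the ones you have installed.
using System;

class ModulePathCheck
{
    static void Main()
    {
        string[] acronyms = { "LCS", "CORE", "PCM", "SFA", "CSS", "AWM" };
        foreach (string acronym in acronyms)
        {
            string value = Environment.GetEnvironmentVariable(acronym + "Path");
            Console.WriteLine("{0}Path = {1}", acronym, value ?? "<not set>");
        }
    }
}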

In addition to the environment variables created by each module, there are some generic environment variables that you need to consider as well. These variables are automatically created when you install the PBS, the Pivotal Client and the Smart Client Framework. The following is the complete list of environment variables, both generic and per module:

Component                                      Environment Variable
Pivotal Business Server (PBS)                  LCSPath
Pivotal Integration for Microsoft Outlook      OIPath
Pivotal Client                                 SCClientPath
Pivotal Client                                 SCCustomPath
Pivotal Client                                 SCPBSPath
Pivotal Smart Client Framework                 SCCPath
Pivotal Contact Management Module              COREPath
Pivotal Contact Management Module              PCMPath
Pivotal Sales Force Automation Module          SFAPath
Pivotal Contact Center Module                  CCPath
Pivotal Customer Service & Support Module      CSSPath
Pivotal Data Management Module                 DMMPath
Pivotal Application Workflow Module            AWMPath
Pivotal Document Management Module             PDMPath
Pivotal Marketing Resource Management Module   MRMPath
Pivotal Email Marketing Module                 EMPath
Pivotal Social CRM Module                      SCRMPath
Pivotal Territory Management Module            TMPath
Pivotal Web Portal Module                      WPPath


So why do you need to care about this? Because all the Pivotal source code, Visual Studio templates, etc. reference these variables. If you open one of the module projects in a text editor (the source code is installed as part of the module installation), you will notice a bunch of references in this style:

<Reference Include="Applications.Core.Common, Version=6.0.6.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(COREPath)\Common\Applications.Core.Common.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Applications.Core.Data, Version=6.0.6.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(COREPath)\Common\Applications.Core.Data.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Applications.Core.Server, Version=6.0.6.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(COREPath)\Server Tasks\Applications.Core.Server.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Applications.Core.Server.InteractionServices, Version=6.0.6.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(COREPath)\Server Tasks\Applications.Core.Server.InteractionServices.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Css.Common.WorkingDaysCalendar, Version=6.0.8.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(CSSPath)\Common\Css.Common.WorkingDaysCalendar.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Css.Data, Version=6.0.8.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(CSSPath)\Common\Css.Data.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Css.Server.FormTasks.ServiceTicket, Version=6.0.8.0, Culture=neutral, PublicKeyToken=47e8a196c40562a5, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(CSSPath)\Server Tasks\Css.Server.FormTasks.ServiceTicket.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Pivotal.Core, Version=6.0.0.0, Culture=neutral, PublicKeyToken=1ef445b59b9ece60, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(LCSPath)\Pivotal.Core.dll</HintPath>
  <Private>False</Private>
</Reference>
<Reference Include="Pivotal.Core.Common, Version=6.0.0.0, Culture=neutral, PublicKeyToken=1ef445b59b9ece60, processorArchitecture=MSIL">
  <SpecificVersion>False</SpecificVersion>
  <HintPath>$(LCSPath)\Pivotal.Core.Common.dll</HintPath>
  <Private>False</Private>
</Reference>

Notice the HintPath elements: they reference the environment variables mentioned before. So, if your development machine doesn’t have these environment variables configured, the project will not compile, since Visual Studio will not be able to resolve those references.
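
If one of the variables is missing, you can recreate it by hand in System Properties, or from code. A minimal sketch, assuming the default install path for SFA 6.0.8 (adjust the path to your version; the process must run elevated to write machine-level variables):

// Minimal sketch: recreate a missing module variable at machine level.
// The path below is an example for SFA 6.0.8; adjust it to your version.
Environment.SetEnvironmentVariable(
    "SFAPath",
    @"C:\Program Files (x86)\CDC Software\Pivotal CRM\Applications\Sales Force Automation 6.0.8\Assemblies",
    EnvironmentVariableTarget.Machine);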

The reference paths in the Toolkit

So, now that you understand the environment variables and how to make your projects compile, the time comes when you need to upload the assembly into the BM using the Toolkit’s Code File Importer. When importing, you may get an error saying that a particular assembly cannot be found. This is because the Toolkit analyses the assembly and checks its references, so it needs to know where those referenced assemblies are. You tell the Toolkit where to find them using the Reference Paths tab in the Code File Importer:

Code File Importer

You will need to add a reference path (a folder) for each of the assembly directories you are referencing. The following is a list of all the folders you need to add:

$(LCSPath)
$(COREPath)\Client Tasks
$(COREPath)\Common
$(COREPath)\Server Tasks
$(PCMPath)\Client Tasks
$(PCMPath)\Common
$(PCMPath)\Server Tasks
$(OIPath)
$(SCCPath)
$(SCCustomPath)
$(SCClientPath)
$(SCPBSPath)
$(AWMPath)\Client
$(AWMPath)\Common
$(AWMPath)\Server
$(CCPath)\Client
$(CCPath)\Common
$(CCPath)\Server
$(CSSPath)\Client Tasks
$(CSSPath)\Common
$(CSSPath)\Server Tasks
$(DMMPath)\Client Tasks
$(DMMPath)\Common
$(DMMPath)\Data
$(DMMPath)\Server Tasks
$(EMPath)
$(MRMPath)\Client Tasks
$(MRMPath)\Data
$(MRMPath)\Server Tasks
$(PDMPath)\Application Pages
$(PDMPath)\Client Tasks
$(PDMPath)\Data
$(PDMPath)\Server Tasks
$(SCRMPath)
$(SFAPath)\Client Tasks
$(SFAPath)\Data
$(SFAPath)\Server Tasks
$(TMPath)
$(WPPath)

Notice that most modules require more than one folder. This is because their assemblies are split into separate folders for client tasks, common code and server tasks.

Adding these folders one by one is a tedious task. The good news is that they are stored in the registry, so you can create a .reg file to add them all at once. The registry path where these references are stored is the following:

For a 64bit OS: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Pivotal\Relationship\Connections\CRM CM\Code File Importer\Search Folders

For a 32bit OS: HKEY_LOCAL_MACHINE\SOFTWARE\Pivotal\Relationship\Connections\CRM CM\Code File Importer\Search Folders

To add them all at once, just import this file (remove the Wow6432Node part if you’re using a 32bit OS):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Pivotal\Relationship\Connections\CRM CM\Code File Importer\Search Folders]
"Folder0"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Application Workflow Module 6.0.9\\Assemblies\\Client"
"Folder1"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Application Workflow Module 6.0.9\\Assemblies\\Common"
"Folder2"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Application Workflow Module 6.0.9\\Assemblies\\Server"
"Folder3"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Contact Center 6.0.9\\Assemblies\\Client"
"Folder4"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Contact Center 6.0.9\\Assemblies\\Common"
"Folder5"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Contact Center 6.0.9\\Assemblies\\Server"
"Folder6"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Customer Service and Support 6.0.8\\Assemblies\\Client Tasks"
"Folder7"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Customer Service and Support 6.0.8\\Assemblies\\Common"
"Folder8"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Customer Service and Support 6.0.8\\Assemblies\\Server Tasks"
"Folder9"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Data Management Module 6.0.9\\Assemblies\\Client Tasks"
"Folder10"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Data Management Module 6.0.9\\Assemblies\\Common"
"Folder11"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Data Management Module 6.0.9\\Assemblies\\Data"
"Folder12"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Data Management Module 6.0.9\\Assemblies\\Server Tasks"
"Folder13"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Document Management 6.0.9\\Assemblies\\Application Pages"
"Folder14"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Document Management 6.0.9\\Assemblies\\Client Tasks"
"Folder15"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Document Management 6.0.9\\Assemblies\\Data"
"Folder16"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Document Management 6.0.9\\Assemblies\\Server Tasks"
"Folder17"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Email Marketing 6.0.10\\Assemblies"
"Folder18"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Marketing Resource Management 6.0.9\\Assemblies\\Client Tasks"
"Folder19"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Marketing Resource Management 6.0.9\\Assemblies\\Data"
"Folder20"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Marketing Resource Management 6.0.9\\Assemblies\\Server Tasks"
"Folder21"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\Core\\Client Tasks"
"Folder22"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\Core\\Common"
"Folder23"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\Core\\Server Tasks"
"Folder24"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\PCM\\Client Tasks"
"Folder25"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\PCM\\Common"
"Folder26"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Pivotal Contact Management 6.0.8\\Assemblies\\PCM\\Server Tasks"
"Folder27"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Sales Force Automation 6.0.8\\Assemblies\\Client Tasks"
"Folder28"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Sales Force Automation 6.0.8\\Assemblies\\Data"
"Folder29"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Sales Force Automation 6.0.8\\Assemblies\\Server Tasks"
"Folder30"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Social CRM 6.0.10\\Assemblies"
"Folder31"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Territory Management 6.0.11\\Assemblies"
"Folder32"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Business Server"
"Folder33"="C:\\Program Files (x86)\\Grupo Lanka\\Pivotal CRM\\Framework Library\\Assemblies"
"Folder34"="C:\\Program Files\\CDC Software\\Smart Updater\\40\\Packages\\PivotalClient\\Client\\6.0.9.1\\bin\\standard"
"Folder35"="C:\\Program Files\\CDC Software\\Smart Updater\\40\\Packages\\PivotalClient\\Custom\\6.0.9.1\\bin\\standard"
"Folder36"="C:\\Program Files\\CDC Software\\Smart Updater\\40\\Packages\\PivotalClient\\PBS\\6.0.9.0\\bin\\standard"
"Folder37"="C:\\Program Files\\CDC Software\\Smart Updater\\40\\Packages\\PivotalOfficeIntegration\\OIRuntime\\6.0.9.2\\bin\\Standard"
"Folder38"="C:\\Program Files\\CDC Software\\Smart Updater\\40\\Packages\\Scc\\4.0.1.58"
"Folder39"="C:\\Program Files (x86)\\CDC Software\\Pivotal CRM\\Applications\\Web Portal 6.0.11\\Assemblies"

Notice that the paths I’m specifying above might not match your particular installation, so be careful when importing this file into your registry. At the very least, check that the versions are the ones you have installed, and if not, modify the file to match your particular versions. If the folders don’t show up in the Code File Importer on the first try, import the file again.
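
If you would rather not maintain hard-coded paths at all, the same entries can be generated from the environment variables discussed earlier. A minimal sketch, using only a few of the (variable, subfolder) pairs from the folder list above; run it elevated, and drop the Wow6432Node part on a 32bit OS:

// Minimal sketch: build the Code File Importer search folders from the
// module environment variables instead of a hand-written .reg file.
using System;
using Microsoft.Win32;

class SearchFolderWriter
{
    static void Main()
    {
        // (environment variable, subfolder) pairs; extend with the rest of
        // the folder list for the modules you have installed.
        var entries = new[]
        {
            ("LCSPath", ""),
            ("COREPath", @"\Common"),
            ("COREPath", @"\Server Tasks"),
            ("CSSPath", @"\Common"),
            ("CSSPath", @"\Server Tasks"),
            ("SFAPath", @"\Server Tasks")
        };

        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Wow6432Node\Pivotal\Relationship\Connections\CRM CM\Code File Importer\Search Folders"))
        {
            int i = 0;
            foreach (var (variable, subfolder) in entries)
            {
                string root = Environment.GetEnvironmentVariable(variable);
                if (root == null) continue; // module not installed
                key.SetValue("Folder" + i++, root + subfolder);
            }
        }
    }
}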

A healthy development machine

Although all the environment variables should be created automatically by each component or module installer (I’ve seen cases where this doesn’t happen, though), configuring a development machine so that all the right variables and Toolkit reference paths are in place is a long and error-prone task.

Fortunately, there is help. Grupo Lanka has created a set of development tools. One of these is the Environment Check tool, which verifies that a development machine is correctly configured with all the environment variables. The nice thing about this tool is that it also creates and fixes the environment variables: it is smart enough to figure out the version of each installed component or module and creates the appropriate variables and Toolkit reference paths pointing to the right location. If you install a new version of the product (which changes the paths), the tool can automatically update your environment to match the new paths. I highly recommend it, since it can save you a lot of headaches. Here is a screenshot of the tool:

Environment Check