The use of PCL projects and paradigms such as MVVM allows for a great deal of code sharing between the different platform projects. The recently announced shared projects in Visual Studio 2013 Update 2, and Xamarin's support for them, improve code sharing when developing for multiple platforms. I've been using the Xamarin tools for Visual Studio to create cross-platform applications for mobile devices, but this is not enough. If you truly want to use MVVM, data binding, etc., you need to resort to complementing frameworks. I've been using MvvmCross, as it is one of the most complete and most widely adopted frameworks for using MVVM with Xamarin on non-Windows platforms (such as iOS and Android).

MvvmCross is an excellent framework and library. It provides NuGet packages for the core and a set of plugins for platform-specific functionality. It's very easy to adopt and understand, thanks to the great tutorials published by its author, Stuart Lodge, in his N+1 series. All in all, it is a must if you want to create cross-platform applications. However, there is one thing I don't like about MvvmCross: internationalization support.

Internationalization of applications using MvvmCross (explained in this N+1 series video) has, in my opinion, the following shortcomings:

  • It looks like a proprietary (non-standard) way of implementing internationalization
  • It is based on platform-specific plugins
  • Localization resources are defined in the platform projects and not in the PCL project. This means that you have to repeat the strings and the localization work on each platform.
  • It is based on JSON (not that I dislike JSON, but I dislike seeing it used in something that is not standard)
  • There are no tools that allow you to easily translate the strings

Stuart, at the end of the video, suggests that there might be other ways to do internationalization, such as using resource files. Resource files are supported by Visual Studio and also by Xamarin; however, they are not supported natively on every platform. Stuart also points at this url for an alternative approach to internationalization using resource files. After reading that article I was not satisfied: it was not clear, and I had a difficult time following it. I thought that there must be a better approach.

I wanted a solution to internationalization that complies with the following guidelines:

  • It must be based on resource files
  • Resource files should be maintained in only one project
  • I should be able to use the excellent Multilingual App Toolkit tool to maintain the translation
  • It should use the platform specific way of implementing internationalization

Finding a solution that complies with all of the above guidelines was not easy, so I want to share my findings and save other people time digging through the internet for a better approach to internationalization.

I will use a solution like the one from the article A sample cross platform application using MVVM, but you can use any existing solution that has a PCL project and platform-specific projects for WinRT, Windows Phone, iOS and/or Android.


Using the Multilingual App Toolkit in a PCL project

I found this article on the internet that explains how to use the Multilingual App Toolkit in a PCL project. The article describes what I wanted to accomplish, but it doesn't cover everything needed to make it work: it lacks some important steps. It also shows only part of the whole solution: it explains internationalization for the WinRT and Windows Phone platforms and doesn't explain how to extend this to iOS and Android projects. I still suggest you read it, since it provides a good explanation of the differences in resource files between PCL, WinRT and Windows Phone. What follows is a more detailed explanation of how to use the Multilingual App Toolkit in a PCL project.

I’m assuming that you have already installed the Multilingual App Toolkit extension in Visual Studio. If you haven’t, please install it before continuing.

We will use the Windows Phone project as the main project for our internationalization work. The reason is that Windows Phone and PCL projects use the same .resx files for resources, and also that it is much easier to use the Multilingual App Toolkit from a Windows Phone project.

Resource file created as part of the WP8 project

When you create a Windows Phone project, the Visual Studio template includes a Resources\AppResources.resx resource file. We will add a copy of this resource file to the PCL project, since the PCL needs the resource file as part of its file structure. We will then remove the Windows Phone resource file and replace it with a link to the resource file defined in the PCL.

Follow these steps:

  • In the PCL project, create a Resources folder
  • Add a copy of the AppResources.resx file located in the Windows Phone Resources folder
  • Make sure that the custom tool used for this added file is PublicResXFileCodeGenerator, and that the resource file also has a designer file associated with it.
  • Delete the AppResources.resx file from the Resources folder of the Windows Phone project
  • In the Windows Phone project, in the Resources folder, add a link to the AppResources.resx file of the PCL project
  • Make sure that the custom tool used for this linked file is empty
  • In the Windows Phone project, delete the file called LocalizedStrings.cs
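If you want to verify the link step, it shows up in the Windows Phone .csproj roughly as follows. This is only a sketch: the project name "SampleApp" and the relative path are assumptions based on this sample solution.

```xml
<!-- Sketch of the linked resource entry in the Windows Phone .csproj.
     The relative path "..\SampleApp\Resources\..." is an assumption;
     adjust it to your own solution layout. -->
<ItemGroup>
  <EmbeddedResource Include="..\SampleApp\Resources\AppResources.resx">
    <Link>Resources\AppResources.resx</Link>
  </EmbeddedResource>
</ItemGroup>
```

Note the absence of a `<Generator>` element, which matches the step above: the linked file must have an empty custom tool, since the designer class is generated in the PCL.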

Your solution structure should now look like this:

Solution structure after performing the steps

Now, follow these steps:

  • With the Windows Phone project selected, in Visual Studio menu go to “Tools” and then select “Enable Multilingual App Toolkit”. This will add the pseudo language to the project (which you can delete if you want to).
  • Right click on the Windows Phone project and from the menu select the “Add Translation Language…" option. This will open the Multilingual App Toolkit dialog to add languages to your project.
  • Add as many languages as you need. In my case, and for this example, I’m adding Spanish and Italian (no specific culture)
  • In the PCL project, select the option to “Show All Files” on the Solution Explorer
  • In the PCL project, refresh the Resources folder. You should see a .resx file for each of the added languages.
  • Select all the .resx files, then right click and choose “Include in Project”. This will add those resource files to the PCL project.

Your solution structure should now look like this:

Solution structure after adding the new languages
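In case the screenshot doesn’t come through, the Resources folder of the PCL project should now contain roughly the following (Spanish and Italian are the languages added in this example; qps-ploc is the pseudo language added when enabling the toolkit):

```
SampleApp (PCL)
└── Resources
    ├── AppResources.resx
    ├── AppResources.es.resx
    ├── AppResources.it.resx
    └── AppResources.qps-ploc.resx
```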

Now, we need to add a service to our PCL project to provide translated strings. Follow these steps:

  • In the PCL project, open the default resource file, Resources\AppResources.resx, and change the access modifier from “Internal” to “Public”

Access Modifier

  • In the Services folder, add a new class called ResourceService.cs and replace the code with the following:
using SampleApp.Resources;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace SampleApp.Services
{
    public class ResourceService
    {
        public static AppResources Resources;

        public AppResources LocalizedResources
        {
            get
            {
                return Resources ?? (Resources = new AppResources());
            }
        }
    }
}
  • On the Windows Phone project, open App.xaml and replace the following line:
<local:LocalizedStrings xmlns:local="clr-namespace:SampleApp" x:Key="LocalizedStrings"/>

with this line:

<services:ResourceService x:Key="LocalizedStrings" />

Make sure you define the services namespace to point to the PCL assembly.

  • On the WinRT project, open App.xaml and add the following resource:
<services:ResourceService x:Key="LocalizedStrings" />

Make sure you define the services namespace to point to the PCL assembly.

Now, we need to configure the projects. Follow these steps:

  • In the Windows Phone project, right click and choose “Properties” to see the project properties. In the “Application” tab, make sure that the “Supported Cultures” section lists all the languages available for the application.
  • In the WinRT project, open the manifest file, Package.appxmanifest, and in the “Application” tab make sure that “Default language” is set to the same language used as your default language in the Windows Phone project.
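For reference, the default language lives in the Resources element of Package.appxmanifest (you can also edit the XML directly). A minimal sketch, assuming en-US is your default language:

```xml
<!-- Fragment of Package.appxmanifest; "en-US" is an example value,
     replace it with your Windows Phone project's default language -->
<Resources>
  <Resource Language="en-US" />
</Resources>
```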

Now you can start using the Multilingual App Toolkit to translate your resources. As an example, let’s create a new string:

  • In the Windows Phone project, double click the Resources\AppResources.resx file
  • Add a new string (in this example “users”) and its translation to the default language (in this example “en”)

Resource file

  • Compile the solution
  • The Multilingual App Toolkit will update the .xlf files, and also update the .resx files, which are located in the PCL project file structure.
  • On the Windows Phone project, double click any of the .xlf files to open the Multilingual Editor and start doing the translation of the new strings
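The .xlf files are standard XLIFF documents. For the “users” string added above, each one contains a translation unit along these lines (the Spanish target is an example, the exact attributes may vary, and the surrounding file/group elements are omitted):

```xml
<!-- Sketch of a single XLIFF translation unit -->
<trans-unit id="users" translate="yes">
  <source>Users</source>
  <target state="translated">Usuarios</target>
</trans-unit>
```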

Multilingual editor

At this point we have the following:

  • All resource files are in the PCL project
  • The main resource file is edited from the Windows Phone project
  • We can use the Multilingual App Toolkit, from the Windows Phone project, to edit translations
  • We have a service in the PCL project that will provide translated strings for all projects

Let’s see now how we can use this to translate the user interface of a WinRT and a Windows Phone project (iOS and Android will be explained later).


Translation in the PCL

The PCL has no user interface, so we only need to worry about how to get translated strings from code. Resource files automatically generate a strongly typed class with all the resources (see the associated AppResources.Designer.cs file). We can use this class to get access to the strings:

string label = AppResources.users;


Translation in Windows Phone

Remember, from all the previous steps, that we have a ResourceService class in our PCL project to provide translated strings. We have created a static resource in the App.xaml that creates an instance of this service.

We can get translated strings in XAML by using data binding against the service. Here is an example for the string “users” created before:

<TextBlock Text="{Binding LocalizedResources.users, Source={StaticResource LocalizedStrings}}" />

Notice the use of the static resource defined in App.xaml, which we named LocalizedStrings. From code, we can use the generated resource class directly, as follows:

label.Text = AppResources.users;


Translation in WinRT

We use a similar approach to the translation in Windows Phone. We have also created the static resource in App.xaml. In XAML you get a translated string as follows:

<TextBlock Text="{Binding LocalizedResources.users, Source={StaticResource LocalizedStrings}}" />

Notice the use of the static resource defined in App.xaml, which we named LocalizedStrings. From code, we can use the generated resource class directly, as follows:

label.Text = AppResources.users;

There is one drawback to doing translation this way in WinRT: we don’t get to use the x:Uid way of translating UI elements. But this is acceptable, since x:Uid applies only to WinRT and we want to do translations once for all platforms.


Translation in Other Platforms

Let’s see how we are doing on our goals:

  • It must be based on resource files: Yes! We use resource files to declare our strings
  • Resource files should be maintained in only one project: Yes! We use the Windows Phone project to maintain the resources
  • I should be able to use the Multilingual App Toolkit tool to maintain the translation: Yes! We use the Multilingual App Toolkit from the Windows Phone project to translate our strings.
  • It should use the platform specific way of implementing internationalization: Yes for Windows Phone and WinRT, not yet for iOS and Android.

iOS and Android do not use resource files; they use their own methods of declaring internationalization strings. So far, our solution doesn’t work for these two platforms.

There is a very good article by Chris Miller in Code Magazine called Cross-Platform Localization for Mobile Apps, where the author explains how internationalization should be done in iOS and Android.

Resources in Android

In Android, resources are specified in XML files contained in a folder structure inside the Resources folder, where each folder is named Values-<languageCode>:

Android resources

The contents of a resource file are as follows:

Android resources XML
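If the image doesn’t render, an Android string-resource file looks like this (the string names match this article’s example, but the values here are assumptions):

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
  <string name="ApplicationTitle">Sample App</string>
  <string name="users">Users</string>
</resources>
```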

Resources in iOS

In iOS, resources are specified as text files contained in a folder structure in the root of the project, where each folder is named <languageCode>.lproj:

iOS resources

Each folder has a Localizable.strings file containing the following structure:

iOS resources strings
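If the image doesn’t render, a Localizable.strings file is a plain list of quoted key/value pairs (these entries mirror the example above; the values are assumptions):

```
/* key = value; one entry per string */
"ApplicationTitle" = "Sample App";
"users" = "Users";
```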


A Partial Solution

In the Code Magazine article, the author bridges the gap between resource files and the iOS/Android methods with a very clever approach based on T4 templates. The author provides the T4 templates in the article, but I thought these could be improved: they require some manual modifications before you can use them. Also, in the case of iOS you have to use string literals to reference the strings; we would rather use an auto-generated class with all the strings as constants. So I consider this a partial solution to the problem.

As I did with the resource files for Windows, I thought I could improve on the approach provided in Code Magazine. So, here we go.


An Improved T4 Template to Generate iOS/Android String Resources

For this we will need a couple of extensions from the Visual Studio gallery. We need the T4 Toolbox, and optionally if we want to edit the T4 templates, a T4 editor such as the Tangible T4 Editor. Once the extensions are installed, restart Visual Studio and reopen your solution.

We will create the T4 templates in our PCL project. In the PCL project, under the Resources folder, add a new T4 template (select a “Text Template” item from the “Add New Item…” dialog box in VS). Call the template Resx2OthersTemplate.tt, and replace its content with the following:

<#@ include file="T4Toolbox.tt" #>
<#@ assembly name="System.Core" #>
<#@ assembly name="System.Xml" #>
<#@ assembly name="System.Xml.Linq" #>
<#@ Assembly Name="System.Windows.Forms" #>
<#@ import namespace="System" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Diagnostics" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Collections" #>
<#@ import namespace="System.Collections.Generic" #> 
<#@ import namespace="System.Text.RegularExpressions" #>
<#@ import namespace="System.Xml" #>
<#@ import namespace="System.Xml.Linq" #>
<#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
<#+ 

public static class ResxUtilities
{
    public static Dictionary<string, string> GenerateResources(string resourceName)
    {
        Dictionary<string, string> items = new Dictionary<string, string>();
        string locale = GetLocale(resourceName);

        if (locale != "")
        {
            locale = "_" + locale;
        }

        // Read in the .resx file and collect the data elements
        if (File.Exists(resourceName))
        {
            XDocument document = XDocument.Parse(File.ReadAllText(resourceName));

            foreach(var item in document.Element("root").Elements("data")) 
            { 
                string Name = EscapeName(item);
                string Value = EscapeValue(item);

                items.Add(Name, Value);
            }
        }

        return items;
    }

    public static string GetNameSpace(string filename)
    {
        string [] words = filename.Replace(".\\", "").Split(new char[] {'.'});
        return words[0];
    }

    public static string GetLocale(string filename)
    {
        filename = Path.GetFileName(filename);
        string [] words = filename.Replace(".\\", "").Split(new char[] {'.'});

        if (words.Length > 2)
        {
            return words[1];
        }
        else
        {
            return "";
        }

    }

    public static string EscapeName(XElement item)
    {
        string name = item.Attribute("name").Value;
        return Regex.Replace(name, "[^a-zA-Z0-9_]{1,1}", "_");
    }

    public static string EscapeValue(XElement item)
    {
        XElement vitem = item.Descendants().FirstOrDefault();
        string name = vitem.Value;
        name = name.Replace("'", "\\'");
        return name;
    }

    public static string GetLanguage(string filename)
    {
        string lang = null;
        var f = Path.GetFileName(filename);
        var foo = f.Split('.');
    
        if (foo.Count() > 2)
        {
            if (foo[1].Length == 2)
            {
                lang = foo[1];
            }
        }
        if (foo.Count() == 2)
        {
            lang = "en";
        }
    
        return lang;
    }
}

public class Resx2AndroidTemplate : Template
{
    public string ResxFileName {get; set;}

    public override string TransformText()
    {
        string fullname = this.Context.Host.ResolvePath(ResxFileName);
        Dictionary<string, string> items = ResxUtilities.GenerateResources(fullname);

        WriteLine("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        WriteLine("<resources>");

        foreach(KeyValuePair<string, string> pair in items)
        {
            Write("  <string name=\"");
            Write(pair.Key);
            Write("\">");
            Write(pair.Value);
            WriteLine("</string>");
        }
        WriteLine("</resources>");

        return this.GenerationEnvironment.ToString();
    }
}

public class Resx2iOSTemplate : Template
{
    public string ResxFileName {get; set;}

    public override string TransformText()
    {
        string fullname = this.Context.Host.ResolvePath(ResxFileName);
        Dictionary<string, string> items = ResxUtilities.GenerateResources(fullname);

        foreach(KeyValuePair<string, string> pair in items)
        {
            WriteLine(String.Format("\"{0}\"=\"{1}\";", pair.Key, pair.Value));
        }

        return this.GenerationEnvironment.ToString();
    }    
}

public class Resx2ClassTemplate : CSharpTemplate
{
    public string ResxFileName {get; set;}

    public override string TransformText()
    {
        #>
namespace <#= DefaultNamespace #>
{
    public static class Strings
    {
<#+
        string fullname = this.Context.Host.ResolvePath(ResxFileName);
        Dictionary<string, string> items = ResxUtilities.GenerateResources(fullname);

        foreach(KeyValuePair<string, string> pair in items)
        {
#>
        public static string <#= Identifier(pair.Key) #> = "<#= pair.Key #>";
<#+
        }
#>
    }
}
<#+
        return this.GenerationEnvironment.ToString();
    }    
}
#>

In the same folder add another T4 template and call it Resx2Others.tt, and replace its content with the following:

<#@ template debug="true" hostSpecific="true" #>
<#@ output extension="log" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.IO" #>
<#@ assembly name="EnvDTE" #>
<#@ include file="Resx2OthersTemplate.tt" #>
<#
    // Create instances of the templates for iOS and Android
    Resx2AndroidTemplate androidTemplate = null;
    Resx2iOSTemplate iosTemplate = null;
    Resx2ClassTemplate classTemplate = new Resx2ClassTemplate();

    var hostServiceProvider = (IServiceProvider)Host;
    var dte = (EnvDTE.DTE)hostServiceProvider.GetService(typeof(EnvDTE.DTE));        

    foreach(EnvDTE.Project project in dte.Solution.Projects)
    {
        string projectName = project.Name.ToLower();
        if(projectName.Contains("ios") || projectName.Contains("touch"))
        {
            iosTemplate = new Resx2iOSTemplate();
            iosTemplate.Output.Project = project.FullName;
            iosTemplate.Output.ItemType = "Content";
        }
        else if(projectName.Contains("droid"))
        {
            androidTemplate = new Resx2AndroidTemplate();
            androidTemplate.Output.Project = project.FullName;
            androidTemplate.Output.ItemType = "AndroidResource";
        }
        
    }
    
    // Set the current directory to the .tt folder
    Directory.SetCurrentDirectory(Path.GetDirectoryName(Host.TemplateFile));

    // Set the file mask for the resx files to read from
    var files = Directory.GetFiles(".", "AppResources*.resx");

    foreach(var resxFile in files)
    {
        WriteLine("Processing file {0}", resxFile);

        // Fix up the file name
        string resxFileName = resxFile.Replace(".\\","");
                
        string locale = ResxUtilities.GetLocale(resxFile);

        if (!(locale.Equals("qps-ploc", StringComparison.CurrentCultureIgnoreCase)))
        {
            if (!string.IsNullOrWhiteSpace(locale))
            {
                locale = "-" + locale.Replace("-", "-r");
            }

            // Android
            if(androidTemplate != null)
            {
                androidTemplate.ResxFileName = resxFileName;
                string androidStringsFolder = @"Resources\Values" + locale;

                // Set the destination filename and path and transform the resource
                androidTemplate.Output.File = Path.Combine(androidStringsFolder, Path.GetFileName(Path.ChangeExtension(ResxUtilities.GetNameSpace(resxFile), ".xml")));
                androidTemplate.Output.Encoding = Encoding.UTF8;
                androidTemplate.Render();
            }

            // iOS
            if(iosTemplate != null)
            {
                iosTemplate.ResxFileName = resxFileName;
                // Don't need the locale, just the language
                var lang = ResxUtilities.GetLanguage(iosTemplate.ResxFileName);

                if (lang != null) 
                {
                    iosTemplate.Output.File = Path.Combine(lang + ".lproj",  "Localizable.strings");
                    iosTemplate.Output.Encoding = Encoding.UTF8;
                    iosTemplate.Render();
                }
            }

            // generate a class file with constants only for the main resource file (which doesn't specify a locale)
            if(String.IsNullOrWhiteSpace(locale))
            {
                classTemplate.Output.File = "Strings.cs";
                classTemplate.ResxFileName = resxFileName;
                classTemplate.Render();
            }
        }
    }
#>

This is the main T4 template, the one that needs to run. Notice that this template includes the Resx2OthersTemplate.tt template. These templates are a rewrite of the templates found in the Code Magazine article, so the credit should go to Chris Miller for coming up with this solution. All I did was improve on the idea, adding some automation to remove the hard-coded values and also generating a strongly typed class with all the string names to be used in code. This is a summary of the improvements I made:

  • The template automatically detects whether an iOS and/or Android project is part of the solution. If one is not present, it skips the creation of strings for that platform.
  • The template automatically finds out the name of the projects, so there is no need to specify these in the template
  • The template automatically generates a Strings.cs file in the PCL project providing constants to be used instead of string literals in the code.
  • The template sets the appropriate content type for iOS to “Content” and for Android to “AndroidResource”
  • The code was changed to be more modular

T4 templates do not run automatically on build (unless you have the pro version of the Tangible T4 editor). There are some solutions on the internet to have T4 templates run on build, but I will not cover them here since this article is already too long. For now, remember to run the T4 template (right click on Resx2Others.tt and choose “Run Custom Tool” from the menu) every time you modify your translations. Of course, you don’t need to do this on every build: typically you start with one platform (e.g. Windows Phone), and once the app is fully translated you use the Multilingual App Toolkit to translate the strings and then run the T4 template to update the translations for both iOS and Android.


Translation in Android

When you run the T4 template, it will automatically generate the Values-<languageCode> folders and the AppResources.xml files with all the strings. To translate elements of the UI, use the following:

<TextView
    android:text="@string/users"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:id="@+id/textView1" />

In code you will use the following:

label.Text = Resources.GetString(Resource.String.users);


Translation in iOS

When you run the T4 template, it will automatically generate the <languageCode>.lproj folders and the Localizable.strings files with all the strings. To translate in code, use:

label.Text = NSBundle.MainBundle.LocalizedString(Strings.users, Strings.users);

Notice that we are using a class named Strings instead of string literals. This class was generated as part of the T4 template and is included in the PCL project. It is generated as follows:

public static class Strings
{
    public static string ResourceFlowDirection = "ResourceFlowDirection";
    public static string ResourceLanguage = "ResourceLanguage";
    public static string ApplicationTitle = "ApplicationTitle";
    public static string AppBarButtonText = "AppBarButtonText";
    public static string AppBarMenuItemText = "AppBarMenuItemText";
    public static string users = "users";
}

With this auto-generated class we avoid the use of string literals in our projects.


Sample App

You can find a sample VS project here:


Conclusion

Let’s review again our goals:

  • It must be based on resource files: Yes! We use resource files to declare our strings
  • Resource files should be maintained in only one project: Yes! We use the Windows Phone project to maintain the resources
  • I should be able to use the Multilingual App Toolkit tool to maintain the translation: Yes! We use the Multilingual App Toolkit from the Windows Phone project to translate our strings.
  • It should use the platform specific way of implementing internationalization: Yes, for Windows Phone, WinRT, iOS and Android.

I think this approach is much better than the one used in MvvmCross. Let me know what you think, and leave a comment!

Read Only Views (ROVs) were introduced in Pivotal 6. A ROV is basically a view in the database that is treated as a Pivotal table, so you can create forms, searches and queries associated with it. ROVs are very powerful and allow for diverse scenarios, including integration with third-party systems, reporting, and even BI. I will assume that you know how ROVs work in Pivotal 6; in this article I will explain how you can use ROVs in Pivotal 5.9.

A lot of clients who are still using Pivotal 5.9 (either Rich Client or Windows Client) have asked me how they can integrate external data into their CRM in an easy way. The answer so far has been to put in place a complex data synchronization scenario between the external data source and a Pivotal table. ROVs are the perfect solution for this kind of problem: you can define a view over the external data (using linked servers on MS SQL Server, or database links on Oracle) and have it treated as a normal table in Pivotal. The problem is that ROVs do not work in Pivotal 5.9. Or do they?

The good news is that it is possible to make ROVs work in Pivotal 5.9, with some tricks of course. The bad news is that this is probably not supported by Aptean. Next, I will explain how to configure ROVs in your Pivotal 5.9 system.

You can trick Pivotal into using a view while it thinks it is using a normal table: create any table using the Toolkit, delete that table in the database, and then create a view with the same name and schema as the Pivotal table (with the Rn_ fields and all). Voila! You are using a view as a Pivotal table. This works, but the problem appears when you try to do an Apply Customization Changes (ACC) using the Pivotal Administration Console (PAC): Pivotal will look for a table, will not find one (there is a view instead), and will try to create it. Here you get into trouble, because you cannot have a table and a view with the same name. Result: after your first ACC you have an inconsistent system. This does not work; we need something else.
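To make the naive trick concrete, here is a minimal sketch. The table, view, and column names are hypothetical; in a real system the Rn_ columns must match exactly whatever schema the Toolkit generated.

```sql
-- Step 1: the Toolkit created this table via ACC; remove it.
drop table My_Rov_Table;

-- Step 2: recreate it as a view with the exact same name and schema,
-- including the Rn_ system columns.
create view My_Rov_Table as
select
    s.Source_Id                     My_Rov_Table_Id,
    cast(s.Name as varchar(80))     Rn_Descriptor,
    s.Created_On                    Rn_Create_Date,
    s.Modified_On                   Rn_Edit_Date,
    null                            Rn_Create_User,
    null                            Rn_Edit_User,
    s.Some_Column
from Some_External_Source s;
```

As explained above, this only survives until the next ACC, which is why a different mechanism is needed.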

It turns out that the software that handles the creation and maintenance of ROVs is the PAC. If you have an up-to-date 5.9 system, chances are you are currently using PAC version 6.0.8 or above (this is the one introduced with the change in the licensing scheme, getting rid of the old and troublesome License Server). If you don’t have PAC version 6.0.8 or above, the steps below will not apply to you and you will need to deal with ROVs some other way (it is still possible, though).


The Table

Let’s do a simple example: suppose that we need to display a calendar of all the internal and external (ePartner) activities. We can have a table called Activities that is a union of all the contact activities (those entered by ePartner users and saved in the Contact_Activities table) and internal activities (those entered by the CRM users and saved in the Rn_Appointments table) defined in our system. (A more useful example would use external data, but for the sake of simplicity this will be enough.) Let’s create the table in Pivotal first:

Activity table

We’ve created a table with some of the common fields between the Rn_Appointments and Contact_Activities tables. This is a list of those fields:

Access_Type
Activity_Complete
Activity_Complete_Date
Activity_Type
Appt_Date
Appt_Description
Due_Date
End_Time
Location
Notes
Start_Time
Time_Zone_Id

Now, do an ACC and have the PAC create the table in the database. So far, nothing new: we have a new table. Now comes the fun part. The steps that follow need to be done in the database. There are some minor differences between Oracle and SQL Server, so I will provide notes when something differs between them. When I say that something needs to be done in the ED, in SQL Server this means performing the steps in the ED database, while in Oracle it means logging in as the ED schema user. The same applies to the BM.

In the ED, go ahead and delete the table from the database (yes, delete the table!):

drop table activity;


Configuration of the BM

We need to modify the schema of the Tables table in the BM. The BM contains the metadata for the entire Pivotal system, and the tables are defined in the Tables table. We need to add two fields to this table. In the BM, execute the following:

-- Oracle
alter table Tables add Table_Type number(3,0);
alter table Tables add Read_Only_View clob;

-- SQLServer
alter table Tables add Table_Type tinyint
alter table Tables add Read_Only_View ntext
go

We added two new fields to the table. Table_Type, as its name suggests, identifies the type of the table; ROVs are identified with a table type of 2. The Read_Only_View field will contain the view definition. Let’s update these two fields.

-- Oracle
update Tables set Table_Type = 2 where Table_Name = 'Activity';
commit;
-- SQLServer
update Tables set Table_Type = 2 where Table_Name = 'Activity'
go


The View Definition

Now, let’s define the view that will replace the table. The view schema has to match exactly the schema of the table, including the Rn_ fields and lengths and types. If the schemas do not match you will get a complaint from PAC when doing the ACC.

-- Oracle
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast('000A' || substr(cast(a.Contact_Activities_Id as varchar2(16)),5,12) as raw(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a
    
    

-- SQLServer
select 
    a.Rn_Appointments_Id Activity_Id,
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    a.Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    a.Time_Zone Time_Zone_Id
from
    Rn_Appointments a
    
union all

select 
    cast(0x000A + substring(a.Contact_Activities_Id,3,6) as binary(8)) Activity_Id,    
    cast(a.Appt_Description as varchar(80)) Rn_Descriptor,
    a.Rn_Create_Date,
    a.Rn_Edit_Date,
    a.Rn_Create_User,
    a.Rn_Edit_User,
    a.Access_Type,
    a.Activity_Complete,
    a.Activity_Completed_Date,
    a.Activity_Type,
    a.Appt_Date,
    a.Appt_Description,
    null Due_Date,
    a.End_Time,
    a.Location,
    a.Notes,
    a.Start_Time,
    null Time_Zone_Id
from
    Contact_Activities a

You can see both versions of the view, for Oracle and for SQLServer. There are some things to keep in mind when creating the ROV. In our example, the Rn_Descriptor field is cast to varchar(80) in both branches, since the Table_Descriptor type has length 80 (and the view schema and the table schema need to match). The other important thing is the formula used to calculate the Id of the new table. We want to use the same Id as the underlying tables, but one of them (the one for Contact_Activities) is given an offset in order to avoid duplicate Ids: notice how the first two bytes of the 8-byte Id are replaced with 0x000A, using string manipulation in Oracle and binary concatenation in SQLServer. In the end this is just a manipulation of Pivotal Ids.
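The Id manipulation can also be illustrated outside SQL. Here is a minimal Python sketch (the function name is mine, not part of any Pivotal tooling) of the offset applied to a Contact_Activities Id: the first two bytes of the 8-byte Pivotal Id are replaced with 0x000A, which is exactly what both the Oracle and the SQLServer expressions do.

```python
def offset_contact_activities_id(pivotal_id: bytes) -> bytes:
    """Mirror the view's Id formula: replace the first 2 bytes
    of the 8-byte Pivotal Id with the 0x000A offset."""
    if len(pivotal_id) != 8:
        raise ValueError("Pivotal Ids are 8 bytes long")
    return b"\x00\x0a" + pivotal_id[2:]

# Example: 0x8000000000000001 -> 0x000A000000000001
print(offset_contact_activities_id(bytes.fromhex("8000000000000001")).hex().upper())
```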

Make sure you update the Read_Only_View field in the Tables table for the Activity table with the query for the view as defined above.

Now, do an ACC. The first time you do an ACC after you’ve configured the ROV, the PAC might propose both to add the missing table and to create the view. Don’t accept the changes yet: the PAC proposes to create the table based on the information contained in the shadow tables in the ED. Do an ACC one more time and you should see that only the creation of the view is proposed. Accept the changes. Now you have your ROV working and integrated in your Pivotal 5.9 system, and you can start creating forms and queries, defining security (it must be read-only), etc.

There are some things that need to be taken care of in order to have a complete solution.


Deployments

Not everything could be that easy. When I did my first deployment, a replica of the BM from the development environment to the integration environment, I was surprised to see that the changes made to the Tables table didn’t travel with the replica. If this didn’t work I had a big problem, since I didn’t want to repeat the whole procedure every time I did a deployment. After some digging I found the solution: the PAC creates the structure of the BM based on the information contained in the RSys_Tables and RSys_Table_Fields tables in the BM. These are what in Pivotal jargon are known as “shadow tables”. We need to update the shadow tables to reflect our newly created fields.

In the BM run the following:

-- Oracle
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Table_Type',
  10,
  null
);
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 2) as raw(8)) from RSys_Table_Fields),
  (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  1,
  null
);
commit;

-- SQLServer
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 1 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Table_Type',
    10,
    null
)
insert into RSys_Table_Fields(Table_Fields_Id, Tables_Id, Field_Name, Field_Type, Field_Length) values(
    (select max(Table_Fields_Id) + 2 from RSys_Table_Fields),
    (select Tables_id from RSys_Tables where Table_Name = 'Tables'),
    'Read_Only_View',
    1,
    null
)
go

The above script just inserts the new field definitions into RSys_Table_Fields. We take the maximum Table_Fields_Id and add 1 and 2 respectively to create the Id of each new field.
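The somewhat cryptic UTL_RAW expression in the Oracle version just treats the 8-byte Id as a 4-byte prefix plus a 4-byte big-endian counter, and bumps the counter. A Python sketch of the same idea (names are mine):

```python
def next_pivotal_id(max_id: bytes, increment: int = 1) -> bytes:
    """Keep the first 4 bytes of the Id and add `increment`
    to the big-endian integer stored in the last 4 bytes."""
    prefix = max_id[:4]
    counter = int.from_bytes(max_id[4:], "big") + increment
    return prefix + counter.to_bytes(4, "big")

# Example: 0x800000000000000F + 1 -> 0x8000000000000010
print(next_pivotal_id(bytes.fromhex("800000000000000F")).hex().upper())
```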

After this, when you do a backup of BM, replicate the BM from one system to another, or do an UFO, the view definitions will be maintained.


Metadata in the Toolkit

You should also update the metadata for the toolkit to keep the metadata consistent. You need to add the new BM fields, Table_Type and Read_Only_View, in the CM database. In the CM run the following:

-- Oracle
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
);

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select cast(utl_raw.substr(max(Table_Fields_Id),1,4) || utl_raw.cast_from_binary_integer(utl_raw.cast_to_binary_integer(utl_raw.substr(max(Table_Fields_Id),5,4)) + 1) as raw(8)) from Table_Fields),
  sysdate,
  sysdate,
  hextoraw('8000000000000001'),
  hextoraw('8000000000000001'),
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
);

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

commit;


-- SQLServer
insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Table_Type',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Simple_Choice'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Table_Type',
  'Table_Type',
  'Tables~~Table_Type'
)

insert into Table_Fields(
  Table_Fields_Id, 
  Rn_Create_Date,
  Rn_Edit_Date,
  Rn_Create_User,
  Rn_Edit_User,
  Field_Name, 
  Dictionary_Id,
  Ordinal,
  Tables_Id, 
  Field_Label,
  Rn_Descriptor,
  Rn_Unique_Name
) values (
  (select max(Table_Fields_Id) + 1 from Table_Fields),
  GETDATE(),
  GETDATE(),
  0x8000000000000001,
  0x8000000000000001,
  'Read_Only_View',
  (select data_dictionary_id from data_dictionary where dictionary_name = 'Rn_Memo_50x5'),
  (select max(ordinal) + 1 from Table_Fields where Tables_id = (select Tables_id from Tables where Table_Name = 'Tables')),
  (select Tables_id from Tables where Table_Name = 'Tables'),
  'Read_Only_View',
  'Read_Only_View',
  'Tables~~Read_Only_View'
)

update rsys_last_id set last_id_value = (select max(table_fields_id) from table_fields) where table_name = 'Table_Fields';

go



Performance

Unfortunately, we cannot forget about performance. This part of the article applies to ROVs in general, including ROVs in Pivotal 6.x.

Both SQLServer and Oracle are smart enough to use the indexes of the underlying tables when querying the view, so any query you run against the ROV needs supporting indexes on the underlying tables. In our example this is easy, since we only have two underlying tables and we can define the indexes using the Toolkit. It is not that easy when you work with tables that are not in the same database server, accessed through linked servers in SQLServer or database links in Oracle. You need to be careful and test that the queries are executed at the source, otherwise you will have a performance problem.

There is one thing in particular that is not obvious: the primary key (or what would be the primary key if it were a normal table) of the ROV. In our example the primary key is the Activity_Id field. This is a calculated field, and it is not indexed. This means that if Pivotal opens a form for a particular record of the ROV, the platform will query the view by primary key, which is not indexed, thus causing a performance problem.

To avoid this we need to define an index on the calculated field. In our example, the Id for the Rn_Appointments table is the primary key of the table, with no formula, so there is no need to worry there. This is not the case for Contact_Activities, where the Id has a formula.

Oracle supports the creation of function-based indexes, so the new index is as follows:

-- Oracle
create unique index Contact_Activities_ROV1 on Contact_Activities (cast('000A'||substr(cast(Contact_Activities_Id as varchar2(16)),5,12) AS raw(8)));

SQLServer doesn’t support function-based indexes, but you can create an index on a view. The problem is that you can’t index a view that has a text or ntext field, and in our case we have one of those. So we need to resort to another technique: let’s create a computed column in the Contact_Activities table and then define an index on it:

-- SQLServer
alter table Contact_Activities add Contact_Activities_Id_ROV as cast(0x000A + substring(Contact_Activities_id,3,6) as binary(8))

create unique index Contact_Activities_ROV1 on Contact_Activities(Contact_Activities_Id_ROV)

go


Formula field

You need to create this field in the Toolkit as well, and then do an ACC.

The problem with this approach is that you will need to do this manually in every environment, since Pivotal doesn’t allow you to enter the above formula as part of the field definition. You only need to do it manually once per environment, though; the formula should be respected by the PAC on successive deployments.

However, this is something I have not tested yet, so please be careful with this solution.





Considerations

The above was just one example of the many uses of ROVs. The objective of this article was to explain how to use ROVs in Pivotal 5.9 and the steps required to make them part of the platform.

As briefly explained, the use of ROVs doesn’t come for free; there are many aspects you should consider before implementing ROVs in any version of Pivotal. Performance is usually the part that causes most of the trouble, and it should be taken seriously when big tables are used as the underlying data. Ignoring performance might affect your whole system. There are many good practices on the use of ROVs and I will try to describe some of them in future articles.


Pivotal stores queries using a binary format. Nobody knows (at least nobody I know) what this binary structure is, and Pivotal provides neither documentation nor any tool to extract the text from these queries. If you’ve used Pivotal long enough you have probably wished you could get the text of those queries. A good example of when this could be helpful is when you modify a field and would like to see what parts of the system could be affected by the change. You can query the metadata in the BM and see where the field is being used, but when it comes to queries there is no way to know, unless of course you go through all the queries in the system and visually check whether the field is used (ehmm.. I don’t want to do that).

I was recently working on a localization project for a Pivotal system. As part of the localization we decided to take some of the fields that were initially created as Combos and change them to Choices. (Note: Combos are saved as strings in the DB, while Choices are saved as integers and translated to strings at run time, which makes Choices the best data type for multi-language systems.) Making this change is not easy in Pivotal: the Toolkit doesn’t allow it, the database doesn’t allow it either, and you also need to change the data in the ED (I will explain how to do this automatically in a future post). Anyway, after the change to Choice was made I had to make sure that all the other objects in the BM were “aware” of the change. In the case of queries, you need to edit each query (remove the field and then add it again) so it has the right metadata to treat the field as a string (Combo) or an integer (Choice). I asked some of my coworkers how they had been doing this in the past and the answer was… ehmmm… “go through all the queries one by one and see if the field is being used”. There’s no way I’m doing that! So I decided it was time to dig into the binary format of a query.

I will first describe the structure of the binary format and then provide solutions for both SQLServer and Oracle. The concepts and solutions presented here work for any version of Pivotal 5.x and 6.x.

Binary Structure of a Pivotal Query

Pivotal stores queries in a table called Saved_Lookups, which has a structure such as the following:

Saved_Lookups table structure

The actual query is saved in a field called SQL_Tree_Binary. If you query this field you will get something similar to this:

Query results for Saved_Lookups

As expected, SQL_Tree_Binary contains binary (hexadecimal) values.

First let’s talk about the record Id. This table is present in all Pivotal databases: ED, BM and CM. We will focus here on queries created with the Toolkit for the system; we don’t want to consider the queries created in the Toolkit for the metadata (which are stored in the same table in the BM). How do we differentiate them? Using the Id (the Saved_Lookups_Id field) of the record:

  • Records with an Id that starts with 0x80 are system queries
  • Records with an Id that starts with 0x00 are metadata queries

You can do this query to identify them:

-- SQLServer
select 
  saved_lookup_name,
  case substring(saved_lookups_id,1,1)
   when 0x00 then 'metadata'
   when 0x80 then 'system'
  end type
from saved_lookups

-- Oracle
select
  saved_lookup_name,
  case utl_raw.substr(saved_lookups_id,1,1)
    when hextoraw('00') then 'metadata'
    when hextoraw('80') then 'system'
  end type
from
  saved_lookups

Now, let’s talk about the binary format. There are two kinds of format: Unicode and non-Unicode. The format started as non-Unicode (single-byte) in the early days of Pivotal, but at some point (I don’t know exactly when) Pivotal realized that queries needed to support international characters and switched to a Unicode (double-byte) format. Your system could have both! Queries that haven’t been touched/edited since they were first created may still be in the non-Unicode format. In the latest versions of Pivotal 5.x and 6.x all new queries are created in Unicode format, and non-Unicode queries are converted to Unicode when modified.

The header

The type of format (Unicode or non-Unicode) can be determined by reading the first 5 bytes:

  • 0x0100000000 : non-Unicode
  • 0x01000000FF : Unicode

Knowing whether it is Unicode or not is important, since we will need to read the binary accordingly. The binary format has a kind of “header”, which depends on the type:

  • Header for non-Unicode queries: 0x0100000000 (length: 5)
  • Header for Unicode queries: 0x01000000FFFEFF00FFFEFF (length: 11)

As you can see, the header size varies depending on whether the query is Unicode or not.

The length

After the header comes the number of bytes we need to read to get the text of the query. Again we have two flavors: when the length fits in a single byte (i.e. the length is less than 255), it is stored in that byte. If the length doesn’t fit in a single byte, we find 0xFF followed by two bytes for the length. That is:

  • 0xXX: when the length is less than 255
  • 0xFFYYZZ: when the length is greater than or equal to 255

Here XX or YYZZ is the binary representation of an integer value for the length. But there is another caveat (why make it easy?): when the length is represented with 2 bytes, those bytes are stored little-endian, which means that to get the integer value you need to swap YYZZ to ZZYY.

So, after reading the header (which is 5 or 11 bytes depending on whether the query is Unicode) we need to read 1 or 3 more bytes, depending on whether the first of them is 0xFF, to get the length. If we read 1 byte we convert it to an integer; if we read 3 bytes we take the second and third bytes, swap them, and convert that to an integer. We end up with this decision tree:

  • If it is Unicode, then skip 11 bytes for the header. For the length
    • If the 12th byte is 0xFF then read bytes 14th and 13th (inverted) and convert them to integer
    • If the 12th byte is not 0xFF then convert the 12th byte to integer.
  • If it is non-Unicode, then skip 5 bytes for the header. For the length
    • If the 6th byte is 0xFF then read bytes 8th and 7th (inverted) and convert them to integer
    • If the 6th byte is not 0xFF then convert the 6th byte to integer.

The start index

Lastly, we need to know where to start reading our text. As you might have guessed, it will depend on the above conditions. So we have the following:

  • If it is Unicode, then skip 11 bytes for the header. For the length
    • If the 12th byte is 0xFF we skip 3 bytes so our text starts at byte 15th
    • If the 12th byte is not 0xFF we skip 1 byte so our text starts at byte 13th
  • If it is non-Unicode, then skip 5 bytes for the header. For the length
    • If the 6th byte is 0xFF we skip 3 bytes so our text starts at byte 9th
    • If the 6th byte is not 0xFF we skip 1 byte so our text starts at byte 7th
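The decision tree above can be sketched as a small Python parser (my own helper, not part of any Pivotal tooling). It returns the Unicode flag, the text length and the 1-based byte position where the text starts:

```python
def parse_header(blob: bytes):
    """Parse the SQL_Tree_Binary prefix described above.
    Returns (is_unicode, length, start) with `start` 1-based."""
    if blob[:5] == bytes.fromhex("01000000FF"):
        is_unicode, pos = True, 11   # Unicode header is 11 bytes
    elif blob[:5] == bytes.fromhex("0100000000"):
        is_unicode, pos = False, 5   # non-Unicode header is 5 bytes
    else:
        raise ValueError("unrecognized header")
    if blob[pos] == 0xFF:            # escape marker: 2-byte little-endian length
        length = int.from_bytes(blob[pos + 1:pos + 3], "little")
        start = pos + 4
    else:                            # single-byte length
        length = blob[pos]
        start = pos + 2
    return is_unicode, length, start

# Unicode, 2-byte length 0x047B = 1147, text starts at byte 15
print(parse_header(bytes.fromhex("01000000FFFEFF00FFFEFFFF7B042800")))
```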


Some examples:

Example 1: SQL_Tree_Binary = 0x01000000FFFEFF00FFFEFFFF7B042800….

The first 5 bytes are 0x01000000FF, so it is unicode. We skip 11 bytes to get the length. Byte 12th is 0xFF, meaning that we need bytes 13th and 14th to get the length. These bytes are 0x7B04. To convert them to integer we invert them to 0x047B. So the length is 1147. And the query starts at byte 15th.


Example 2: SQL_Tree_Binary = 0x01000000FFFEFF00FFFEFF352800….

The first 5 bytes are 0x01000000FF, so it is unicode. We skip 11 bytes to get the length. Byte 12th is not 0xFF, meaning that byte 12th has the length. We convert byte 0x35 to integer and we get that the length is 53. And the query starts at byte 13th.


Example 3: SQL_Tree_Binary = 0x0100000000FF5F052820….

The first 5 bytes are 0x0100000000, so it is not unicode. We skip 5 bytes to get the length. Byte 6th is 0xFF, meaning that we need bytes 7th and 8th to get the length. These bytes are 0x5F05. To convert them to integer we invert them to 0x055F. So the length is 1375. And the query starts at byte 9th.


Example 4: SQL_Tree_Binary = 0x0100000000F12820….

The first 5 bytes are 0x0100000000, so it is not unicode. We skip 5 bytes to get the length. Byte 6th is not 0xFF, meaning that byte 6th has the length. We convert byte 0xF1 to integer and we get that the length is 241. And the query starts at byte 7th.


This is not the whole story on the binary format of the query, but for now it is enough to get us what we want.


Getting the Query Text

OK, now that we understand the structure, it is very simple to get the text (or is it?). From our analysis above we get two important pieces of information:

  • startIndex: where we can find the text of the query in all the binary
  • length: the length of bytes we need to consider to get the text

All that is left is to take a substring of the binary value inside the SQL_Tree_Binary field, starting at startIndex and taking length bytes. In the case of a Unicode query we need to take length * 2 bytes (because Unicode is double-byte, remember?).
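In Python the whole extraction can be sketched in a few lines. This is a hypothetical helper of mine; it assumes the Unicode text is UTF-16-LE and the non-Unicode text is a single-byte codepage (decoded here as Latin-1 for simplicity):

```python
def extract_query_text(blob: bytes) -> str:
    """Read the header, then the length, then the text itself."""
    is_unicode = blob[:5] == bytes.fromhex("01000000FF")
    pos = 11 if is_unicode else 5            # skip the header
    if blob[pos] == 0xFF:                    # 3-byte length form
        length = int.from_bytes(blob[pos + 1:pos + 3], "little")
        pos += 3
    else:                                    # 1-byte length form
        length = blob[pos]
        pos += 1
    raw = blob[pos:pos + (length * 2 if is_unicode else length)]
    return raw.decode("utf-16-le" if is_unicode else "latin-1")

# Build a tiny Unicode sample: header + length 6 + 'SELECT' in UTF-16-LE
sample = bytes.fromhex("01000000FFFEFF00FFFEFF") + bytes([6]) + "SELECT".encode("utf-16-le")
print(extract_query_text(sample))  # SELECT
```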

Both SQLServer and Oracle allow you to convert a binary/raw (respectively) datatype to varchar or nvarchar. I will explain how to do this later, but first let me point out something that gets us a cleaner text:

The binary format used to represent the query is more complex than what’s explained here. The text of the query has two parts:

  • The first part is the text of the query as shown in the Toolkit in the query editor when we click the “Show SQL” button
  • The second part is a more structured form of the query, with comments for each of the clauses in the query editor.

As an example, take this query:

Sample query

here is the extracted text:

( 
    (
        EXISTS 
        (
            SELECT * 
            FROM   Company__Type 
            WHERE 
                Company__Type.Company_Type  = 5 AND 
                (Company__Type.Company_Id = Company.Company_Id)
        )
    ) AND 
    (Company.Inactive  = 0)
) 

WhereDelim,..,WhereDelim 

( 
    (
        /* Company  Type(Company).Company Type = Partner */  
        EXISTS 
        (
            SELECT * 
            FROM 
            (
                SELECT DISTINCT Company__Type.Company_Id   
                FROM   Company__Type   
                WHERE Company__Type.Company_Type  = 5
            ) SubqueryView   
            WHERE SubqueryView.Company_Id=Company.Company_Id
        )
    )   AND 
    (Company.Inactive  = 0)
)

(Indentation and new lines added by me.) Notice how the two parts are clearly separated by a special delimiter containing the text WhereDelim,..,WhereDelim. The second part is closer to what Pivotal actually sends to the database, and it also has comments that correspond to the lines in the query designer, making it easier to read. We will take only the second part as the text of our query.
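Splitting out the second part once you have the plain text is simple. A Python sketch (my own helper, equivalent to the reverse/charindex trick used in the SQLServer view):

```python
def second_part(query_text: str) -> str:
    """Keep only what follows the last 'WhereDelim' marker, if any."""
    marker = "WhereDelim"
    idx = query_text.rfind(marker)
    if idx < 0:
        return query_text          # no delimiter: return the text as-is
    return query_text[idx + len(marker):]

print(second_part("part1 WhereDelim,..,WhereDelim part2").strip())  # part2
```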


Extracting the Query Text in SQLServer

So, in SQLServer we can create the following view:

create view saved_lookups_text as
select
    table_id,
    table_name,
    saved_lookups_id,
    saved_lookup_name,    
    case
      when charindex('WhereDelim',query_text) > 0 then 
        substring(query_text,len(query_text) - charindex(reverse('WhereDelim'),reverse(query_text)) + 3,4000)
      else query_text
    end query_text
from (
    select
      table_id,
      table_name,
      saved_lookups_id,
      saved_lookup_name,
      case isUnicode
        when 1 then cast(substring(sql_tree_binary, startIndex, queryLength * 2) as nvarchar(4000))
        else cast(substring(sql_tree_binary, startIndex, queryLength) as varchar(8000))
      end query_text
    from (
        select
            saved_lookup_name, 
            case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 0
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 1
            end isUnicode,
            cast(case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 
                    case 
                        when substring(sql_tree_binary,6,1) = 0xFF then substring(sql_tree_binary,8,1) + substring(sql_tree_binary,7,1)
                        else substring(sql_tree_binary,6,1)
                    end
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 
                    case 
                        when substring(sql_tree_binary,12,1) = 0xFF then substring(sql_tree_binary,14,1) + substring(sql_tree_binary,13,1)
                        else substring(sql_tree_binary,12,1)
                    end
            end as int) queryLength,
            case 
                when substring(sql_tree_binary,1,5) = 0x0100000000 then 
                    case 
                        when substring(sql_tree_binary,6,1) = 0xFF then 9
                        else 7
                    end
                when substring(sql_tree_binary,1,5) = 0x01000000FF then 
                    case 
                        when substring(sql_tree_binary,12,1) = 0xFF then 15
                        else 13
                    end
            end    startIndex,    
            s.table_id,
            t.table_name,
            s.saved_lookups_id,   
            s.sql_tree_binary
        from 
          saved_lookups s inner join 
          tables t on s.table_id = t.tables_id
        where substring(saved_lookups_id,1,1) <> 0x00
    ) x
) y

(I know the query could be better written, but for the sake of demonstration it is OK.)

This query will present something like this (notice the query_text field):

Results from view with query text


Notice how, due to the lack of a reverse charindex function in SQLServer, the outer query uses a trick with reverse() to find the last occurrence of the WhereDelim text and then takes the text that comes after it. Notice also how the inner query casts the text to the appropriate character type (nvarchar or varchar) depending on whether the text is Unicode or not.

You can download this SQL script from here.


Extracting the Query Text in Oracle

You guessed it! In Oracle things are not as easy. There are some caveats we need to consider:

  • Oracle has a limit of 2000 characters for the nvarchar2 data type and 4000 characters for varchar2. So if our text is longer than that we will get an error when converting it using the UTL_RAW package.
  • The package used to read BLOBs in Oracle is DBMS_LOB. This package has a convenient SUBSTR function to get pieces of the BLOB (in our case the data in the SQL_Tree_Binary field). This function changes the endianness of the data when the data is Unicode (in other words, it inverts it), so when converting the Unicode text using UTL_RAW we would get funky letters instead.
  • You cannot combine a Unicode charset with a non-Unicode charset in the same field using CASE statements

The view in Oracle is like this:

create view saved_lookups_text as
select
  saved_lookups_id,
  saved_lookup_name,
  case when unicode = 1
    then reverse(cast(utl_raw.cast_to_nvarchar2(utl_raw.reverse(dbms_lob.substr(sql_tree_binary,case when length > 1000 then 2000 else length * 2 end,queryStartIndex))) as varchar2(4000)))
    else utl_raw.cast_to_varchar2(dbms_lob.substr(sql_tree_binary,case when length > 2000 then 2000 else length end,queryStartIndex))
  end queryText,
  table_id,
  table_name
from
(
  select
    saved_lookup_name,
    dbms_lob.substr(sql_tree_binary,5,1) queryType,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then dbms_lob.substr(sql_tree_binary,5,1)
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then dbms_lob.substr(sql_tree_binary,11,1)
    end prefix,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then 0
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then 1
    end unicode,
    utl_raw.cast_to_binary_integer(case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then
           case
             when dbms_lob.substr(sql_tree_binary,1,6) = hextoraw('FF') then utl_raw.concat(dbms_lob.substr(sql_tree_binary,1,8),dbms_lob.substr(sql_tree_binary,1,7))
             else dbms_lob.substr(sql_tree_binary,1,6)
           end
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then
           case
             when dbms_lob.substr(sql_tree_binary,1,12) = hextoraw('FF') then utl_raw.concat(dbms_lob.substr(sql_tree_binary,1,14),dbms_lob.substr(sql_tree_binary,1,13))
             else dbms_lob.substr(sql_tree_binary,1,12)
           end
    end) length,
    case
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('0100000000') then
           case
             when dbms_lob.substr(sql_tree_binary,1,6) = hextoraw('FF') then 9
             else 7
           end
      when dbms_lob.substr(sql_tree_binary,5,1) = hextoraw('01000000FF') then
           case
             when dbms_lob.substr(sql_tree_binary,1,12) = hextoraw('FF') then 15
             else 13
           end
    end queryStartIndex,
    s.table_id,
    t.table_name,
    s.saved_lookups_id,
    s.sql_tree_binary
  from saved_lookups s inner join tables t on s.table_id = t.tables_id
  where
    utl_raw.substr(saved_lookups_id,1,1) = hextoraw('80') -- only business queries
  order by table_id,s.saved_lookup_name
)

Notice in the queryText expression how I do some tricks to overcome the problems mentioned above. This query shows both parts of the query (it doesn’t use the WhereDelim split mentioned above), but that is easy to add. Also, this view doesn’t include queries whose text is longer than 4000 characters if non-Unicode or 2000 characters if Unicode, which unfortunately leaves a lot of queries in the dark.

In Oracle I have instead created a package that gives me what I want in the query output window. With this package you can do things such as:

-- use a procedure to get the text of the query
declare
    queryText varchar2(8000);
begin
  pk_pivotal_toolkit.getQueryText('<some query name>',queryText);
  pk_pivotal_toolkit.printText(queryText);
end;

-- use a function to get the text of the query
declare
  queryText varchar2(8000);
begin
  queryText := pk_pivotal_toolkit.getQueryText('<some query name>');
  pk_pivotal_toolkit.printText(queryText);
end;

-- this will show only the first 4000 chars of a query
select s.*,pk_pivotal_toolkit.getQueryText(saved_lookup_name) query_text from saved_lookups s
where utl_raw.substr(saved_lookups_id,1,1) = hextoraw('80')


-- see what queries use a specific field:
call pk_pivotal_toolkit.findFieldsInQuery('Company.Type, Contact.First_Name');

You can download the whole package from here.


Conclusion

Now you can know which queries make use of a certain field in the database!

I hope you find this useful. If you do, please let me know. Also, if you know a better way of doing this, let me know. And if somebody from Pivotal reads this: please give us a better way to get the text of a query.


When I design a new application I always design it with localization in mind. For cross-platform applications my strategy is based on the use of resource files in a Portable Class Library (PCL) project. See my previous article on Internationalization of a Cross Platform Application for details on how to do localization in a cross-platform application.

For localization I use the Multilingual App Toolkit (MAT). MAT was created to ease the task of localizing Windows Store and Windows Phone applications, and since version 3.0 it can also be used in desktop applications (ASP.NET MVC, WinForms and WPF). But what about PCLs? Can MAT be used in PCLs? Well, if you’re using version 3.1 then yes, you can use MAT in PCLs. If you’re using version 3.0 or lower then yes, you can enable a PCL project for translation with MAT, but no, it doesn’t work as it is supposed to. Let me show how to make MAT work in a PCL project, and how to solve the compilation problem you get if your project was enabled with MAT 3.0.

Create a PCL project and enable MAT on it

(note: I’m using Visual Studio 2013 Update 2 and Multilingual App Toolkit 3.1)

Start Visual Studio and create a new “Class Library (Portable)” project and call it “Portable” (or whatever you want).

[Image: New PCL project]

In the target frameworks dialog, add the targets you need. I will add the newest ones: “Windows Phone 8.1”, “Windows 8.1” and “.Net Framework 4.5”.

[Image: Choose target frameworks]

The combination of target frameworks you choose will determine the kind of resources files you can add to the PCL project. There are basically two types of resource files:

  • .resx: This type of resource has a strong type associated with it, which allows you to use the strings from code. It is used by the Windows Phone Silverlight 8/8.1 and .Net Framework 4.5 targets.
  • .resw: This type does not have a strong type associated with it (although you can create one by using ResW File Code Generator). It is used by the Windows 8/8.1 and Windows Phone 8.1 targets.

If any of the selected targets uses .resx files, then the PCL project will use .resx files; otherwise the PCL project will use .resw files. In my case, since I chose “.Net Framework 4.5” as one of the targets, my PCL project will use .resx resources. Having a strong type associated with my resources is very important later when using them in the UI and app logic. (See my article on How to use Multilingual App Toolkit in Universal Apps for more information.)
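To illustrate why the strong type matters, here is a minimal sketch of how shared code can consume the generated class (the Portable.Resources namespace and the Greeting key are assumptions for this example, not something the walkthrough creates):

```csharp
// Sketch only: assumes a public Resources.resx containing a "Greeting" string,
// with its generated strong type in the Portable.Resources namespace.
using System.Globalization;
using Portable.Resources;

public static class Greeter
{
    public static string GetGreeting()
    {
        // Resolves the string for the current UI culture at runtime
        return Resources.Greeting;
    }

    public static string GetGreeting(string cultureName)
    {
        // Or ask the ResourceManager for a specific culture explicitly
        return Resources.ResourceManager.GetString(
            "Greeting", new CultureInfo(cultureName));
    }
}
```

Because this code lives in the PCL, every platform head gets the same localized strings without duplicating them per project.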

Now, delete the file Class1.cs generated by the Visual Studio wizard. The first thing you will be tempted to do is to enable MAT on the project. Select the project (not the solution) in Solution Explorer and from the Tools menu select “Enable Multilingual App Toolkit”. You will get the following message:

[Image: matNoResource]

The message is pretty clear: we need to add a resource file before we can enable MAT. Let’s put our resource files in a folder called Resources. Right click on the project in Solution Explorer, from the popup menu select “Add” and then “New Folder”, and call it Resources. Now let’s add a resource file: right click on the newly created folder and from the popup menu select “Add” and then “New Item…”. The Visual Studio “Add New Item” dialog appears:

[Image: New resource]

Select “Resource File” from the list of templates and name it Resources.resx. Add some strings to your resource file and make sure that the “Access Modifier” for the resource is set to “Public” (because we want to access the generated strong type from other projects):

[Image: resource]

Compile your project. Now we are ready to enable MAT on the project. Select the project (not the solution) in Solution Explorer and from the Tools menu select “Enable Multilingual App Toolkit”. When MAT is enabled, the Pseudo Language file (Resources.qps-ploc.xlf) is automatically added to the project. You can remove this file if you want. Let’s add a new language.

Right click on the project in Solution Explorer and from the popup menu select “Add translation languages…”. MAT will open the “Translation Languages” dialog, where you can select the languages your app will support. In my case I will add Spanish (es) and Italian (it). Click OK to close the dialog. MAT will add an .xlf file to your project for each of the languages you chose. Your solution should now look something like this:

[Image: solution]

Now, compile the project.

Solving the compilation error if using Multilingual App Toolkit prior to 3.1

If you’re not using the latest version of MAT (which at the time of this writing is 3.1), then you will probably get the following error:

[Image: Error]

This happens if you enabled MAT and added languages to your project with a version of MAT prior to 3.1. If you then install MAT 3.1, new projects will be created fine, but old projects will still present the error. To fix this you will need to edit your project file. If you take a closer look at the build output you will find the following message: “No XLIFF language files were found. The app will not contain any localized resources.” This means that MAT is not able to find the files with extension .xlf in the project. Right click on the project in Solution Explorer and from the popup menu choose “Unload Project”, then right click again on the (now unloaded) project and select “Edit Portable.csproj”. Visual Studio will open the project file. Locate the following lines:

    <None Include="Resources\Resources.es.xlf">
      <Generator>XliffResxGenerator</Generator>
      <LastGenOutput>Resources.es.resx</LastGenOutput>
    </None>
    <None Include="Resources\Resources.it.xlf">
      <Generator>XliffResxGenerator</Generator>
      <LastGenOutput>Resources.it.resx</LastGenOutput>
    </None>

And change the node name from None to XliffResource:

    <XliffResource Include="Resources\Resources.es.xlf">
      <Generator>XliffResxGenerator</Generator>
      <LastGenOutput>Resources.es.resx</LastGenOutput>
    </XliffResource>
    <XliffResource Include="Resources\Resources.it.xlf">
      <Generator>XliffResxGenerator</Generator>
      <LastGenOutput>Resources.it.resx</LastGenOutput>
    </XliffResource>

Now, reload the project: right click on the project in Solution Explorer and from the popup menu select “Reload Project”. Now you should be able to compile the project.

Now I have my resource files in a PCL project, and since I’m using .resx files I also have a public resources type that I can use in MVVM and data binding. See more details on how to do this in these two articles:
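As a quick sketch of that pattern, the resources can be wrapped in a small class that views bind to (the class and property names below are illustrative, not part of the walkthrough):

```csharp
// Sketch only: exposes the generated strong type (assumed to live in
// Portable.Resources) through a bindable wrapper class.
using Portable.Resources;

public class LocalizedStrings
{
    // Views bind to this property; the generated Resources class
    // resolves the string for the current UI culture.
    public string Greeting
    {
        get { return Resources.Greeting; }
    }
}
```

Registering an instance of such a class as a XAML resource then lets you write bindings like {Binding Greeting, Source={StaticResource LocalizedStrings}}.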


In a previous post (see How does Pivotal sends emails?) I explained the two methods Pivotal has to send an email: via SMTP or using the email client, depending on whether the email is being sent client-side or server-side and on the SMTP options configured for the system.

In this article I will provide more information on how to configure Pivotal to always send emails using SMTP, both on the client and on the server side. It turns out that when you use the Pivotal Configuration Utility tool to set the server to use SMTP, the following information gets written to the registry:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Pivotal\Email]
"Protocol"="SMTP"
"Server"="smtp.company.com"
"From"="someemail@company.com"
"Reply To"="someemail@company.com"
"Username"=""
"Password"=""

(the key above is for an x64 OS; if configuring an x86 OS, the key is [HKEY_LOCAL_MACHINE\SOFTWARE\Pivotal\Email])

You can just create a .reg file with the text above and import it on any machine where Pivotal runs, and emails will be sent using SMTP. You can also create the above registry entries on a client machine and have the client send SMTP emails as well!
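If you prefer to script this instead of distributing a .reg file, the same values can be written from code. A minimal sketch (run elevated; the server and email values are the same illustrative ones from the .reg file above):

```csharp
// Sketch only: writes the Pivotal SMTP settings shown above using the
// Microsoft.Win32 registry API. Requires administrator rights.
using Microsoft.Win32;

class ConfigurePivotalSmtp
{
    static void Main()
    {
        // On an x64 OS use the Wow6432Node path;
        // on x86 use @"SOFTWARE\Pivotal\Email" instead.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Wow6432Node\Pivotal\Email"))
        {
            key.SetValue("Protocol", "SMTP");
            key.SetValue("Server", "smtp.company.com");
            key.SetValue("From", "someemail@company.com");
            key.SetValue("Reply To", "someemail@company.com");
            key.SetValue("Username", "");
            key.SetValue("Password", "");
        }
    }
}
```

Either way, once the values are in place Pivotal will pick them up and send via SMTP.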

Ah, and this also works for Pivotal 5.9 server side!

So, stop using the Outlook profile to send emails and use SMTP instead.