
You have probably run into a scenario where you need to create a custom Visualforce page for a custom object and make that page available both in the browser and in the Salesforce1 application. Right now there's no easy way to tell Salesforce to use one page for the browser and a different page for SF1. Let's frame the problem from the UI/UX perspective: how do we leverage the UI and the best practices associated with each platform? If we build the page with standard Visualforce tags, the SF1 experience will not be the desired one (it will actually be very ugly!), and if we skip the Visualforce tags and use pure HTML5 instead (as recommended by the Salesforce1 practices), the page will look good in SF1 but will look foreign to browser users. What we want is a single Visualforce page that changes its look depending on the client used to display it. Here's an example:

Same page for both browser and SF1

In the image you can see how the UI for each client is preserved. Also notice that in the SF1 case we are using the standard publisher buttons to work with the page. Same page, same controller, but different UI and UX depending on the client. How do we achieve this? Read on.

An Example

Let's use an example to illustrate the problem. We're developing a questionnaire application: users can create questionnaires, each of which has many associated questions. This is an example of a questionnaire:

questionnaire

The questionnaire can be associated with any object, and we want to be able to answer a specific questionnaire from both the browser client and the Salesforce1 app. Since the number of questions may vary, we need to build a Visualforce page that dynamically displays the questions associated with a specific questionnaire (we couldn't do this with a standard layout). Our Visualforce page will have a controller extension that drives the logic of the page. For this example we will associate a questionnaire with a Contact.

Here is our controller extension:

public with sharing class ceContactQuestionnaire {
    private final ApexPages.StandardController controller;
    private final Contact contactRecord;
    
    public ceContactQuestionnaire(ApexPages.StandardController controller) {
        this.controller = controller;
        controller.addFields(new String[] {sObjectType.Contact.fields.Questionnaire__c.Name});        
        this.contactRecord = (Contact)controller.getRecord();
    }
    
    public String questionnaireName {
        get {
            if(questionnaireName == null && contactRecord.Questionnaire__c != null) {
                List<Questionnaire__c> q = [select Name from Questionnaire__c where Id = :contactRecord.Questionnaire__c];
                if(q.size() > 0) questionnaireName = q[0].Name;
            }
            return questionnaireName;
        }
        private set {
            questionnaireName = value;
        }
    }    

    public List<Contact_Question__c> questions {
        get {
            if(questions == null) {
                questions = new List<Contact_Question__c>();
                for(Contact_Question__c question : [select Id,Question__c,Answer__c from Contact_Question__c where Contact__c = :contactRecord.Id order by Order__c]) {
                    questions.add(question);
                }                    
            }
            return questions;
        }
        set {
            questions = value;
        }
    }
    
    public PageReference save() {
        upsert questions;
        return controller.save();
    }      
}

You can see it's actually a pretty simple controller: the contact has an associated questionnaire, and an auxiliary object holds all the questions for that questionnaire. We query those questions and expose them as a property. The extension also overrides the save method to save the answers given to those questions.

Let's first design the browser version of the page using standard Visualforce tags:

<apex:page standardController="Contact" extensions="ceContactQuestionnaire" >
    
    <apex:pageMessages />
    <apex:sectionHeader title="{!$ObjectType.Questionnaire__c.Label}: {!questionnaireName}" subtitle="{!Contact.Name}" />
    <apex:form >
        <apex:pageBlock >
            <apex:pageBlockButtons >
                <apex:commandButton action="{!save}" value="Save" styleClass="btn btn-default" />
                <apex:commandButton action="{!cancel}" value="Cancel" styleClass="btn btn-default" />
            </apex:pageBlockButtons>
            <apex:pageBlockSection columns="1">
                <apex:repeat var="question" value="{!questions}">
                    <apex:pageBlockSectionItem >
                        <apex:outputLabel value="{!question.Question__c}" />
                        <apex:inputTextarea cols="80" rows="3" value="{!question.Answer__c}"/>
                    </apex:pageBlockSectionItem>
                </apex:repeat>
            </apex:pageBlockSection>
        </apex:pageBlock>
    </apex:form>
</apex:page>

This is a pretty simple Visualforce page: we just render each question we get from the controller as a label for the question and a text area for the answer. This is how it looks in the browser:

rendering on browser

No surprises here: we're using the familiar UI and look and feel of a Salesforce layout. Let's make our Visualforce page available for SF1 (edit the page and select the option "Available for Salesforce mobile apps") so we can open it in SF1. You'll notice that the UI and look and feel are pretty ugly:

rendering on SF1

If we present mobile users with this layout, they will probably complain. So how do we fix this?

Determine what Client is Being Used

We need a way to know which client is displaying our Visualforce page so we can present the appropriate content and user experience. There is no official way to detect this, but there are some tricks. Here I will inspect the page parameters to determine whether the page is running in a browser or in the SF1 app. We need a property in the controller extension:

public Boolean isSF1 {
    get {                    
        if(String.isNotBlank(ApexPages.currentPage().getParameters().get('sfdcIFrameHost')) ||
            String.isNotBlank(ApexPages.currentPage().getParameters().get('sfdcIFrameOrigin')) ||
            ApexPages.currentPage().getParameters().get('isdtp') == 'p1' ||
            (ApexPages.currentPage().getParameters().get('retURL') != null && ApexPages.currentPage().getParameters().get('retURL').contains('projectone') )
        ) {
            return true;
        }else{
            return false;
        }
    }
}

I will use this property to dynamically adjust the UI of the Visualforce page.

One Page for All

For the UI I will use a special version of Bootstrap for Salesforce1, which I have added as a static resource. This is how the page looks now:

<apex:page standardController="Contact" extensions="ceContactQuestionnaire" docType="html-5.0" tabStyle="Questionnaire__c">
    <apex:stylesheet value="{!URLFOR($Resource.Bootstrap_SF1,'/css/bootstrap-namespaced.min.css')}"/>
    
    <apex:outputPanel rendered="{!!isSF1}">
        <apex:pageMessages />
        <apex:sectionHeader title="{!$ObjectType.Questionnaire__c.Label}: {!questionnaireName}" subtitle="{!Contact.Name}" />
        <apex:form >
            <apex:pageBlock >
                <apex:pageBlockButtons >
                    <apex:commandButton action="{!save}" value="Save" styleClass="btn btn-default" />
                    <apex:commandButton action="{!cancel}" value="Cancel" styleClass="btn btn-default" />
                </apex:pageBlockButtons>
                <apex:pageBlockSection columns="1">
                    <apex:repeat var="question" value="{!questions}">
                        <apex:pageBlockSectionItem >
                            <apex:outputLabel value="{!question.Question__c}" />
                            <apex:inputTextarea cols="80" rows="3" value="{!question.Answer__c}"/>
                        </apex:pageBlockSectionItem>
                    </apex:repeat>
                </apex:pageBlockSection>
            </apex:pageBlock>
        </apex:form>
    </apex:outputPanel>
    
    <apex:outputPanel rendered="{!isSF1}">
        <div class="bootstrap" style="padding: 10px;">
            <h1>{!$ObjectType.Questionnaire__c.Label}: {!questionnaireName} <small>{!Contact.Name}</small></h1>
            <apex:form >
                <apex:repeat var="question" value="{!questions}">
                    <div class="form-group">
                        <label>{!question.Question__c}</label>
                        <apex:inputTextarea value="{!question.Answer__c}" rows="3" cols="80" styleClass="form-control"/>
                    </div>
                </apex:repeat>
                <span id="submit"><apex:commandButton action="{!save}" value="Save" styleClass="btn btn-default" /></span>
                <span id="cancel"><apex:commandButton action="{!cancel}" value="Cancel" styleClass="btn btn-default" /></span>                            
            </apex:form>
        </div>
        
    </apex:outputPanel>
</apex:page>

Notice how the apex:stylesheet tag loads the Bootstrap for SF1 CSS library from the static resource. The two apex:outputPanel tags are rendered based on the isSF1 property we created in the controller extension to determine whether the page is being called from SF1. The first output panel is rendered for the browser client, and its code and layout are the same we had before. The new part is the second output panel, which is rendered for the SF1 app. Notice that the SF1 layout doesn't use page blocks or any of the standard layout tags; instead it uses plain HTML div tags to lay out the content. The page in the browser still looks as before, but in SF1 it now looks like this:

rendering on SF1 using Bootstrap

Definitely better, isn't it? But there's still something that doesn't fit the SF1 experience: the publisher bar. Notice that we still have two buttons for saving and canceling, and that the save button on the publisher bar appears disabled. We need to take advantage of the publisher bar and provide a seamless experience to our SF1 users.

Taking Care of the Publisher Bar

Using the publisher bar is pretty easy, once you know what to do, of course. Here is the modified SF1 section of the page:

<apex:outputPanel rendered="{!isSF1}">
    <div class="bootstrap" style="padding: 10px;">
        <h1>{!$ObjectType.Questionnaire__c.Label}: {!questionnaireName} <small>{!Contact.Name}</small></h1>
        <apex:form >
            <apex:repeat var="question" value="{!questions}">
                <div class="form-group">
                    <label>{!question.Question__c}</label>
                    <apex:inputTextarea value="{!question.Answer__c}" rows="3" cols="80" styleClass="form-control"/>
                </div>
            </apex:repeat>
            <apex:actionFunction action="{!save}" name="saveForm" />                
        </apex:form>
    </div>
    
    <script src='/canvas/sdk/js/publisher.js'></script>

    <script>
        if(sforce.one) {                       
            Sfdc.canvas.publisher.subscribe({name: "publisher.showPanel", onData:function(e) { 
                Sfdc.canvas.publisher.publish({name: "publisher.setValidForSubmit", payload: "true"});                                       
            }});
            Sfdc.canvas.publisher.subscribe({name: "publisher.post", onData:function(e) {
                saveForm();
                Sfdc.canvas.publisher.publish( { name: "publisher.close", payload:{refresh:"false"}}); 
            }});            
        }
    </script>
    
</apex:outputPanel>

To work with the publisher bar we need to reference the publisher API provided by the Salesforce platform; you can see we add a script reference to /canvas/sdk/js/publisher.js. We then check the sforce.one object to know whether we are running in SF1 (although this section will not be rendered outside SF1, it is good practice to keep the check). Notice that we removed the save and cancel buttons we had in the previous version, since we will use the buttons on the publisher bar instead.

The publisher API follows a publish/subscribe pattern: it exposes events that we can subscribe to, or publish, as needed. We first subscribe to the publisher.showPanel event, which fires when the page has been opened and finished rendering; at that point we need to enable the save button on the bar. We tell the publisher API to enable the save button by publishing the publisher.setValidForSubmit event.

We then subscribe to the publisher.post event, which fires when the user taps the save button on the publisher bar. Here we save the form by calling the saveForm action function, which invokes the save action on the controller extension. Finally, we publish the publisher.close event to tell the API to dismiss the current page.

This is how the SF1 page looks now with the publisher bar enabled:

rendering on SF1 using the publisher bar

Users can now answer the questions and, when done, tap the save button to save their changes, all with the UI/UX of an SF1 application.


Microsoft recently announced Azure App Service, a new offering that brings Web Apps, Mobile Apps, Logic Apps, and API Apps together in one service. Scott Guthrie's blog has a very good article explaining it.

In this article I will talk about how to use Azure App Service to integrate with Salesforce. For this example I will develop a small solution based on a real-world scenario. I will assume the reader is more familiar with Microsoft's technology, so I will spend more time explaining the Salesforce details.

The Scenario

The scenario is the following: suppose our client is an insurance company that uses Salesforce as its CRM. Insureds call the call center to request assistance depending on the policy coverage they have contracted: for example, assistance on the road (for automotive policies), or professional assistance at home, e.g. a plumber or locksmith (for home insurance policies). All assistance requests are handled by third-party service providers (car shops, locksmiths, car towing companies, etc.). Service requests are entered in the CRM, and third-party providers then attend to these requests and mark them as completed once the service has been delivered. Ideally, external providers would have access to the CRM to see all pending requests and make updates directly in the system. In our case, however, it won't be that easy: there are many service providers and we don't want to buy Salesforce licenses for them (Salesforce licenses are not exactly cheap), and besides, we don't want external users to have access to our CRM. You could create a Community for this and be done with it, but for the sake of illustration I will instead show how to develop a small web portal for service providers and synchronize it with the CRM using Azure App Service.

Our solution will have the following flow:

  1. Call Center users receive assistance requests from insureds and enter them in the CRM (Salesforce)
  2. At regular intervals, a Logic App pulls all pending requests from the CRM using a Salesforce connector and pushes them to the database of the service provider web portal using a SQL Server connector
  3. New requests appear in the service provider portal
  4. Service providers take the requests and deliver the service; once delivered, they change the request's status to completed
  5. At regular intervals the Logic App gets all completed requests and synchronizes them back to the CRM using the Salesforce connector
  6. Finally, CRM users can see which requests the external service providers have completed

scenario

In the diagram, we can see that we will require a bidirectional synchronization between Salesforce and the web app we will create.

It is worth noting that the goal of this article is to describe how to use Azure App Service for integrations with Salesforce. The solution we develop here is not intended to be complete; it is a simple solution that lets us see the most important aspects to consider when doing this kind of integration. A complete solution would probably have a better state workflow associated with requests, notifications (email, SMS and/or push) of new assignments to third-party providers, notifications to insureds about the status of their requests, etc. To keep things simple, and the article not too long, we will focus on the most important aspects and leave some of the details of a real solution as homework.

Requirements

In order to follow along and create the solution we will need:

  • An Azure subscription. If you don't have one you can get a free trial from the Azure web site.
  • A Salesforce development environment. You can get a free Salesforce Developer instance from the Salesforce Developers web site. When you register you will get an email with information about how to connect to the instance.
  • Visual Studio 2013 Community Edition (Update 4 at the time of this writing). You can download it for free from the Visual Studio web site. You will also need the Azure SDK (version 2.5.1 at the time of this writing).


Development of the Salesforce Application

The first thing we'll do is create the necessary objects in our Salesforce developer instance. If you requested a new developer instance and this is the first time you log in to Salesforce, you will land directly in the Salesforce setup area. From here we can customize the system and create new objects. In our case we want an object called Assistance.

salesforce create object

From the menu on the left, go to the "Build" section, then "Create", and then "Objects". You can also use the search box at the top and enter a keyword to filter the options available in the menu. On the "Custom Objects" screen click the "New Custom Object" button. On the object creation screen, fill in the required data as shown in the following screenshot and leave the default values for the rest.


salesforce new custom object


Notice the "Object Name" is Assistance. Every object in Salesforce has an API Name, which is the name we will use to identify the object in development. As a rule, Salesforce appends the __c suffix to the name of every custom object, so the real name of our newly created object is Assistance__c.

Click the "Save" button to create the object. In Salesforce, objects are available immediately. We will now create some fields to capture the data required for an assistance. On the object's detail screen, in the "Custom Fields & Relationships" section, click the "New" button to launch the new field wizard. We will first create the relationship between assistances and contacts: a contact can have many assistances, and an assistance belongs to exactly one contact. Salesforce comes out of the box with an object called Contact (the name has no __c suffix because it is a standard object, not a custom one; all out-of-the-box objects are standard objects). To create the relationship between assistance and contact we need a field of type "Lookup Relationship": choose this option on the wizard screen and click "Next". On the next step, in the "Related To" field, choose the Contact object. On the step after that, specify Contact as the name of the field and leave the default values for the rest.

salesforce new field


Click "Next", leave the default options for the remaining steps, and finish the wizard by clicking "Save & New" so it restarts for the next field. Repeat the wizard to create all the fields described below:

Data Type        | Label       | Name        | Observations
Date             | Date        | Date        | In the "Default Value" field enter the function TODAY()
Text Area (Long) | Description | Description |
Text             | Provider    | Provider    | Enter 50 in the "Length" field
Picklist         | Status      | Status      | In the "Enter values for the picklist..." field enter the values New and Completed, and check "Use first value as default value"
Checkbox         | Synced      | Synced      |


After you finish creating the fields, these should look as follows:

salesforce assistance fields

Note that, as with the custom object's name, the names of the custom fields also carry the __c suffix. Every object in Salesforce, custom or standard, has two fields called Id and Name: the first is the primary key of the object, and the second is a text field that acts as the record's descriptor. Neither field has the __c suffix, as they are considered standard fields (even on custom objects).
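These API names are exactly what we will use later when querying from the Logic App. As a quick sanity check (a minimal sketch you can run in the Developer Console query editor), the fields we just created can be queried like this:

select Id, Name, Date__c, Description__c, Provider__c, Status__c, Synced__c from Assistance__c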

Test the Application by Creating a Few Records

We already have our object for assistances. To test it we will create a new assistance. From the top menu choose the "Contacts" tab to see a list of recent contacts (if this is the first time we use the tab, there will be no recent contacts and the list will be empty). To see all contacts, select the "All Contacts" list from the "View" dropdown and click the "Go" button. Select any contact in the list to see its details. At the bottom of the form we should see the "Assistances" section. Click the "New Assistance" button to create a new assistance record. Notice how some fields are filled automatically based on default values: the Contact field is set because we created the record from a contact, the Date field defaults to today's date, and the Status is set to New because it is the first value in the picklist and we marked it as the default. Enter a name and description and save the assistance.

salesforce new assistance

Create one or two more assistances so we have enough data for the rest of the tests that will follow.
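If you prefer to script this, a hedged alternative is to insert a couple of records from the Developer Console's Execute Anonymous window. The field API names below follow the table above; Contact__c is my assumption for the lookup field's API name, and I assume the record Name field was left as a plain text field:

// Assumes at least one Contact exists in the org (developer orgs ship with sample contacts).
Contact c = [SELECT Id FROM Contact LIMIT 1];
insert new List<Assistance__c>{
    new Assistance__c(Name = 'Flat tire on highway', Description__c = 'Assistance on the road', Contact__c = c.Id),
    new Assistance__c(Name = 'Locked out', Description__c = 'Locksmith needed at home', Contact__c = c.Id)
};
// Date__c and Status__c are omitted so their defaults (TODAY() and 'New') apply.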

If we go back to the contact record we should see the requests we just created. However, the list only shows the name of each request. To see additional fields we need to modify the layout. At the top of the page there is a link called "Edit Layout" which opens the layout editor.

salesforce edit layout

On the layout editor screen, scroll down to the assistances related list and hover over its title until a wrench icon appears. Click the icon to open a screen where you can select the fields to display in the list. Select the Date, Status and Provider fields and click the "OK" button. To save the layout click the "Save" button at the top of the layout editor, which returns you to the contact record. You will now be able to see more details in the list of assistances:

salesforce assistances


Development of the External Service Provider’s Portal

We will now proceed with the creation of the provider’s portal. For this we will create an MVC application using Visual Studio, and later publish it in Azure as a Web App with a SQL Server database.

Open Visual Studio and create a new project of type "ASP.NET Web Application". I have called mine ProvidersPortal. Use the MVC project template and choose "Individual User Accounts" as the authentication method. Once the project is created we will enable Entity Framework migrations. From the Package Manager Console enter the following commands:

enable-migrations
add-migration Initial
update-database

Next, we will create a new entity to represent assistances. In the Models directory, add a new class file called Assistance.cs and replace its contents with the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;
 
namespace ProvidersPortal.Models
{
    public class Assistance
    {
        [MaxLength(20)]
        [Key]
        public string Id { get; set; }
 
        [MaxLength(80)]
        public string Name { get; set; }
 
        public string Description { get; set; }
 
        [MaxLength(50)]
        public string Provider { get; set; }
 
        [MaxLength(20)]
        public string Status { get; set; }
 
        public DateTime? Date { get; set; }
 
        public bool? Synced { get; set;}
    }
}

To expose this entity to Entity Framework we need to add a DbSet property to the context class. To keep things simple we will reuse the database already used by ASP.NET Identity. Open the file Models\IdentityModels.cs and add the Assistances property (the last property in the class below) to the ApplicationDbContext class:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
    }
 
    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }
 
    public DbSet<Assistance> Assistances { get; set; }
}

Next, add a new migration for the new object. In the Package Manager console enter the following commands:

add-migration Assistance
update-database

Now we need a controller and views for the Assistance entity. In the Controllers folder, add a new controller of type "MVC 5 Controller with views, using Entity Framework". Fill in the data as shown in the following figure:

vs add controller

Visual Studio will create (scaffold) the controller and the views for the typical CRUD operations. By default the controller allows anonymous access, and in our case we want only authenticated users (the external service providers) to have access to the assistance requests. Open the controller file, Controllers\AssistancesController.cs, and add the Authorize attribute to it:

[Authorize]
public class AssistancesController : Controller

All that's left is to add a menu option to access the assistances. Open the file Views\Shared\_Layout.cshtml and add the Assistances entry to the menu, as shown below:

<ul class="nav navbar-nav">
    <li>@Html.ActionLink("Home", "Index", "Home")</li>
    <li>@Html.ActionLink("Assistances", "Index", "Assistances")</li>
    <li>@Html.ActionLink("About", "About", "Home")</li>
    <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
</ul>

Compile and run the application. After registering a user you should be able to see the assistances screen (empty for the moment):

providers portal


Publishing of the Portal as a Web App

Now we'll publish the portal as a Web App in Azure and let Visual Studio do all the work for us. In the Solution Explorer, right-click the project node and choose "Publish…". Select "Microsoft Azure Web Apps" as the target. Visual Studio will ask for credentials to connect to your Azure subscription. Click the "New…" button to create a new application. I have called mine ProvidersPortal, and to keep things organized I've decided to create a new App Service plan as well as a new resource group for this application. We also need a database, so select the option to create a new database server. Click the "Create" button to let Visual Studio create the app in Azure.

azure web app creation

Write down the names of the App Service plan and the resource group; you will need them when you create the rest of the applications (Logic Apps and API Apps), since they must share the same plan and group in order to see each other. In my case, both the plan and the group are called ProvidersPortal.

Once the web application is created in Azure, the wizard will continue with the publishing. In the “Settings” section make sure you select “Execute Code First Migrations (runs on application start)” in order to provision the Azure database with the schema required by our application.

vs publish wizard

Finally, click the "Publish" button; when the wizard finishes, it will open the provider's portal running in the cloud! Register a user and browse to the Assistances menu item so Entity Framework creates the database schema for the app.


Integration using a Logic App

We already have our two applications running independently. Now comes the fun part: integrating them using a Logic App in Azure. We will use the Azure portal for all the steps that follow.

First, we will create the API Apps that will allow us to connect to Salesforce and SQL Server.

Creation of API Apps

We need two API Apps, one for Salesforce and the other for SQL Server. Both apps will be created from the Azure Marketplace.

Salesforce API App

salesforce connector

On the Azure portal, click "New", select "Web + Mobile" and then "Azure Marketplace". Select the "API Apps" category. In the search field enter "salesforce" and press Enter. From the search results click "Salesforce Connector". A blade will open with the details of the connector and links to its documentation. Click the "Create" button. A new blade will open with the properties of the connector. From here select the "Package Settings" section. In this section you can specify the Salesforce objects you would like to access. Remove the objects that appear by default and enter the name of the object we created in Salesforce: Assistance__c (remember the __c suffix, which is part of the name):

salesforce connector entities


salesforce connector properties

In the "App Service Plan" section select the plan created earlier when publishing the web application; the service plan must be the same so all the parts can work together. In my case the service plan is called ProvidersPortal. Click "Create" and let Azure provision the API App. After approximately a minute you will see a notification telling you the app is ready.

By the way, if you want to modify the connector's configuration parameters later, you need to do it from the Host of the API App (there is a link to the host on the connector details blade).


SQL Server API App

sqlserver connector

As we did before, search the Azure Marketplace for "sql" and in the results click "Microsoft SQL Connector". A blade will open with the connector details and a link to its documentation. Click the "Create" button to open a new blade with the connector configuration. From here select the "Package Settings" section. In this section you specify the connection details for the database. If you are unsure about the right values, look at the connection string created in the publishing profile of the web project when we published it to Azure (in Visual Studio, in Solution Explorer, go to Properties\PublishProfiles\<projectname>.pubxml). For "Tables" specify Assistances. Make sure to select the same service plan we've been using. This is how the properties look in my case:

sql connector properties

Click "Create" and let Azure provision the API App. After approximately a minute you will see a notification telling you the app is ready. As with the previous API App, if you need to modify the connector configuration later (because something changed or because you made a mistake) you will have to do it from the Host of the API App.



Creation of the Logic App

Let’s stop here for a moment and analyze what we want to accomplish. We want a bidirectional synchronization between Salesforce and the SQL Server database.

For the synchronization from Salesforce to SQL Server we want to do the following:

  • Find all assistances in Salesforce that have not been synced and have a status of "New"
  • For each of these assistances, get its details
  • For each of these assistances, add a new record to the provider portal's database
  • If the insertion is successful, mark the assistance as synced in Salesforce

For the synchronization from SQL Server to Salesforce we want to do the following:

  • Find all assistance records in the portal's database that have been completed but not yet synced
  • For each of these records, update the corresponding record in Salesforce
  • If the update is successful, mark the record as synced in the portal

Since (for the moment) the flow inside a Logic App is linear (each step always depends on the previous one), we will need two Logic Apps, one for each direction of the synchronization.

Note: in this case we use a boolean field to know which records need to be synced from one system to the other. In a real application this is probably not ideal, since each record is synced only once between systems. A better approach would be to base the synchronization on a timestamp field holding the last modification date of each record.
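For reference, a timestamp-based variant of the query we build in the next section could rely on the standard SystemModstamp audit field, which Salesforce updates on every change to a record. The cut-off value below is illustrative and would come from wherever the integration stores its last successful run time:

select Id, Name, Status__c, Provider__c from Assistance__c where SystemModstamp > 2015-04-01T00:00:00Z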

Synchronization from Salesforce to SQL Server

logic app

We will now create the synchronization between the two applications. For this we will create a Logic App and use the connectors created earlier. On the Azure portal, click "New", then "Web + Mobile", and select "Logic App". A new blade will open where you can enter the name of the app and select the service plan. Again, make sure to select the same service plan specified in the connectors. I called my logic app ProvidersPortalLogic and selected the ProvidersPortal plan. Click "Create" to provision the application. After approximately a minute you will see a notification telling you the app is ready.

Open the application and select the "triggers and actions" section to show the canvas where you edit the app's actions. For now, and so we can develop and test our app, check the "Run this logic manually" option, which lets us run the application on demand as we build it. On the right side of the editing canvas you should see, among others, the two API Apps we created in the previous steps: the Salesforce and SQL Server connectors. If you don't see the connectors, you probably didn't select the same service plan for all of them.

Let's start by adding the first step. In the "API Apps" panel click the Salesforce connector. This adds a step to the logic. You need to authorize the connector (i.e., provide connection credentials) so it can connect to the Salesforce instance. Click the "Authorize" button; a screen requesting Salesforce credentials will appear. Enter your credentials and, when asked whether you want to give access to the Azure Logic App, say yes. If everything went well the connector will fetch the metadata from Salesforce and present you with some options:

logic app salesforce connector

It is possible that you get an error saying the metadata has not been generated ("Error fetching swagger api definition"). You can try removing the step (click the wrench icon in the upper-right menu of the step and select "Delete this action") and adding it again. If you still get the same error, try the following trick: switch the editor to code view by clicking the "Code View" icon, remove the Salesforce subscription (the one highlighted in the following image), and then try to add the step again.

salesforce subscription in code view

If the above trick doesn't fix the problem, then you probably have an error in the configuration of the API App (the connector).

From the step's actions, click the three dots ("...") to see more actions and select the "Execute Query" action. This action lets you execute a SOQL query against Salesforce. Enter the following query:

select Id from Assistance__c where Synced__c = false and Status__c = 'New'

We want the Id of the records that have not yet been synchronized and have a status of "New". When we validate the step we should see something like this:

salesforce execute query step

When this step gets executed we will get all the assistance records that satisfy the criteria, and they will be stored in a property called result. Next we will add the second step: get the details of each assistance.

Add a new step by clicking the Salesforce connector in the "API Apps" panel. Select "Get_Assistance__c" as the action. This action retrieves all the fields of a particular record. We want this step to execute once for each record obtained in the previous step. To achieve this, click the wrench icon on the step menu and select "Repeat over a list". A new "Repeat" field will appear, allowing you to specify the list of records to iterate over. We will use an expression to get the list. Expressions are prefixed with the @ character. To get the list of records from the previous step we use the following expression:

@body('salesforceconnector').result.records

The expression means the following: take the body of the step called salesforceconnector and read its result.records property. The body is JSON; the result property is an object containing an array property called records, which holds all the records returned by the query.
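To make the shape concrete, the body of the query step looks roughly like this (an illustrative sketch only; the Ids are made up and the connector may include additional properties):

{
  "result": {
    "totalSize": 2,
    "done": true,
    "records": [
      { "Id": "a0128000001Xxxx" },
      { "Id": "a0128000001Yyyy" }
    ]
  }
}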

By default, the name of each step is the name of the connector plus an index, so for example, the first step of the Salesforce connector is called salesforceconnector, the second step is called salesforceconnector0, the third is salesforceconnector1, and so on.

In the "Record Id" field we enter the id of the current record in the loop. To reference the current record inside the loop we use the @repeatItem() expression, so to get the record's id we use:

@repeatItem().Id

The step should look like this:

salesforce connector get action

Note: we could have skipped this second step and instead retrieved the required fields directly in the query of the first step, but I've done it this way to illustrate the concepts.

Now that we have the data for each record, the third step is to insert the record into the web application's database. For this we use the SQL Server connector: in the "API Apps" panel click "Microsoft SQL Connector" to add a new step to the logic. Select the action "Insert into Assistances (JSON)". The connector will ask for a parameter for each field in the table. As with the previous step, we want the insertion to happen for each record in the list, so click the wrench icon and select "Repeat over a list". In this case we get the list with the following expression:

@actions('salesforceconnector0').outputs.repeatItems

salesforceconnector0 is the previous step, from which we take the outputs property representing the step's output (this output actually contains many records, since that step also executed over a list). From this output we want the repeatItems property, an array with the result for each record processed in the loop. To get a specific field we again use the @repeatItem() expression:

@repeatItem().outputs.body.Id

The expression above gets the Id field from the Salesforce record. If this looks complex, the best thing to do is run the application each time you add a step and analyze each step's output to see the JSON produced; this will give you an idea of how to reach the values you need.

We will use a similar expression for each field of the table. Remember once again that, because we're dealing with custom fields, you need to add the __c suffix to the field name, except for Id and Name, which are standard fields (even though they belong to a custom object). Fill in the fields using the following table:

Field       | Expression
Id          | @repeatItem().outputs.body.Id
Name        | @repeatItem().outputs.body.Name
Description | @repeatItem().outputs.body.Description__c
Provider    | (leave blank)
Status      | @repeatItem().outputs.body.Status__c
Date        | (leave blank for now; see the ** note below)
Synced      | (leave blank)

** The Date field should use the expression @repeatItem().outputs.body.Date__c; however, this value was giving me errors when inserting the record into SQL Server, apparently related to the date format. After trying various formats unsuccessfully I decided to leave it blank for now and find a solution later.

We leave the Provider and Synced fields blank, since they will be filled in from the provider's portal. The step should look as follows:

sql connector step

The last step is to update the record in Salesforce and mark it as synced, so it will not be picked up by subsequent runs of the Logic App. Add a new step by clicking the Salesforce connector in the "API Apps" panel and select the "Update Assistance__c" action. Make the step execute over a list (select "Repeat over a list" from the step menu). The values for the "Repeat" and "Record Id" fields are the same as in the previous step. Set the Synced field to true.

salesforce connector update

We have our Logic App ready! Make sure to save your changes if you haven't already and close the editor (you may get a message asking whether you want to discard your changes even though you have saved them; say yes, your changes are already saved). Run the application by clicking the "Run now" icon. If everything works you should see the results of the execution:

execution results

You should also be able to see the assistances you created in Salesforce in the provider’s portal:

providers portal with records

And records in Salesforce should be marked as synced:

salesforce synced

With this we have finished the first part of our solution.


Synchronizing from SQL Server to Salesforce

For the second part of the solution we will create a new Logic App. The steps are the same as for the previous app; I'll just mention that I will call this one SalesforceLogic. Remember to assign the same service plan, and to run the app on demand make sure you check "Run this logic manually".

Let's start with the first step. Add a SQL Server connector to the app and select the action "Select from Assistances (JSON)". Leave "Fields" blank to get all the fields of the table. In the "Where" field add the following condition:

Status = 'Completed' and Synced is null

The step should look like this:

sql server select step

Now add a second step by clicking the Salesforce connector. Since this is a new Logic App you need to authorize it against the Salesforce instance again. Select the "Update Assistance__c" action. We want the update in Salesforce to run for each record retrieved in the previous step, so make this step execute over a list: select "Repeat over a list" from the step menu. For the list we use the body of the previous step with the expression @body('microsoftsqlconnector'), and for the "Record Id" we use @repeatItem().Id. We need to update the Provider__c and Status__c fields in Salesforce using the expressions @repeatItem().Provider and @repeatItem().Status, respectively. The step should look as follows:

salesforce connector update 2

Finally, we add a third step using the SQL Server connector to update the Synced field in the portal's database so the record will not be picked up the next time the Logic App runs. We make this step run over the same list as the first step, @body('microsoftsqlconnector'), and set the Synced field to true. This step requires a "Where" condition, which needs to be built dynamically with the following expression:

@concat('Id = %27',repeatItem().Id,'%27')

The trick here is to use the URL encoding of the single-quote character, %27. We do it this way because we cannot use a literal single quote inside the text (I tried different escape sequences without success). The expression produces a condition such as: Id = 'xxxxxxx'.

sql connector sync

We have our logic ready! Make sure to save changes and close the editor.

In the provider's portal, open some of the assistances, fill in the Provider field, and change the Status to Completed.

complete an assistance

Run the application by clicking the "Run now" icon. If everything works you should see the updated records in Salesforce:

completed assistances in salesforce

You should also see the records marked as synced in the portal's database:

synced assistances in sql server


Automatic Execution

recurrence

So far we have run both Logic Apps manually, on demand. In a real application we would run them automatically at regular intervals. For this we need to uncheck the "Run this logic manually" option and use an API App that provides a trigger. The Salesforce connector doesn't provide any trigger, but the SQL Server connector does. In both cases another option is the "Recurrence" app, which lets us run the application automatically on a predefined time interval. The only inconvenience is that, on the free plan, the minimum interval you can specify is one hour (one of the reasons we chose to run the logic on demand for this article).


Conclusions

As we've seen, Azure App Service is a powerful tool. With a large (and rapidly growing) catalog of connectors and the scaling possibilities Azure offers, it is very easy to create solutions like the one built in this article. It is definitely a tool that any software architect should have in their tool belt when designing integration solutions.

It is true that some areas could be improved. The documentation lacks some details, and there are some minor glitches when designing the application logic, but I'm pretty sure these will be fixed (if they haven't been already) as time goes by. Also, the flow of a Logic App is very linear; you can work around some scenarios, but others remain limited. Again, I'm sure this will change as more options are added.

Things can only get better, so I will keep following this service and see how it evolves from here.


Resources


You can also read this article in Spanish on the Microsoft MSDN blog for Spain.


In a previous post (Showing System Fields in a Custom Visualforce Page) I talked about how to show system information (Created By, Last Modified By, and Owner) on a custom Visualforce page. The approach I presented works, but it has some shortcomings:

  • It doesn't work well when the object is the detail side of a Master-Detail relationship (the detail object has no owner of its own; the owner comes from the master object)
  • It requires adding some hidden fields to the Visualforce page

Here I present a better approach that addresses both points.

Let's start with the component controller:

public class ComponentControllerSystemInformation {
    public sObject recordValue {
        get {
            return recordValue;
        }
        set {
            recordValue = value;

            Schema.DescribeSObjectResult d = recordValue.getSObjectType().getDescribe();
            String soql = 'select CreatedById,CreatedDate,LastModifiedById,LastModifiedDate' + (isMaster?',OwnerId':'') + ' from ' + d.Name + ' where Id = \'' + recordValue.Id + '\'';
            recordValue = Database.query(soql);
        }
    }
     
    public Id recordId {
        get {
            return recordValue.Id;
        }
    } 
        
    public Id createdById {
        get {
            return (Id)recordValue.get('CreatedById');
        }
    } 
     
    public String createdByName {
        get {
            User createdByUser = [select name from user where id = :createdById limit 1];
            return createdByUser == null ? null : createdByUser.Name;
        }
    }
         
    public String convertedCreatedDate {
        get {
            DateTime createdDate = (DateTime)recordValue.get('CreatedDate');
            return createdDate.format();
        }
    }
     
    public Id lastModifiedById {
        get {
            return (Id)recordValue.get('LastModifiedById');
        }
    } 
     
    public String lastModifiedByName {
        get {
            User lastModifiedByUser = [select name from user where id = :lastModifiedById limit 1];
            return lastModifiedByUser == null ? null : lastModifiedByUser.Name;
        }
    }
         
    public String convertedLastModifiedDate {
        get {
            DateTime lastModifiedDate = (DateTime)recordValue.get('LastModifiedDate');
            return lastModifiedDate.format();
        }
    }
         
    public String ownerName {
        get {
            User ownerUser = [select name from user where id = :ownerId limit 1];
            return ownerUser == null ? null : ownerUser.Name;
        }
    }
     
    public String ownerPhoto {
        get {
            try {
                Id ownerId = (Id)recordValue.get('ownerId');
                User owner = [select smallphotourl from user where id = :ownerId limit 1];
                return owner == null ? null : owner.SmallPhotoUrl;
            } catch(System.RequiredFeatureMissingException e) {
                System.debug('Chatter not enabled in organization:' + e.getMessage());
                return null;
            }
        }
    }
    
    public Boolean isMaster {
      get {
        Schema.DescribeSObjectResult d = recordValue.getSObjectType().getDescribe();
        Map<String, Schema.SObjectField> fields = d.fields.getMap();
        Schema.SObjectField field = fields.get('OwnerId');
        return field != null;
      }
    }
    
    public Id ownerId {
      get {
        if(!isMaster) return null;
        
        return (Id)recordValue.get('OwnerId');
      }
    }             
}

Notice the recordValue setter, where I query the system fields this component needs, removing the requirement from the previous solution to include those fields in the Visualforce markup. Notice also the new isMaster property, which we will use to show or hide the owner information on the Visualforce page.

The component can now be written as follows:

<apex:component controller="ComponentControllerSystemInformation">
    <apex:attribute name="record" assignTo="{!recordValue}"
        type="sObject" description="The object for which to display system information" required="true"/>
     
    <apex:pageBlockSection title="{!$Label.SystemInformation}" columns="2">
        <apex:pageBlockSectionItem >
            <apex:outputLabel value="{!$Label.LastModifiedBy}" />
            <apex:outputPanel >
                <apex:outputLink id="lastModifiedBy"
                    onblur="LookupHoverDetail.getHover('{!$Component.lastModifiedBy}').hide();"
                    onfocus="LookupHoverDetail.getHover('{!$Component.lastModifiedBy}', '/{!lastModifiedById}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"
                    onmouseout="LookupHoverDetail.getHover('{!$Component.lastModifiedBy}').hide();"
                    onmouseover="LookupHoverDetail.getHover('{!$Component.lastModifiedBy}', '/{!lastModifiedById}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"                      
                 value="{!URLFOR('/' + lastModifiedById)}">{!lastModifiedByName}</apex:outputLink>&nbsp;
                <apex:outputText value="{!convertedLastModifiedDate}" />                                                                        
            </apex:outputPanel>
        </apex:pageBlockSectionItem>
        <apex:pageBlockSectionItem rendered="{!NOT(isMaster)}" />    
        <apex:pageBlockSectionItem rendered="{!isMaster}">
            <apex:outputLabel for="owner" value="{!$Label.Owner}" />
            <apex:outputPanel >
                <apex:image value="{!ownerPhoto}" width="16" height="16"/>&nbsp;
                <apex:outputLink id="owner"
                    onblur="LookupHoverDetail.getHover('{!$Component.owner}').hide();"
                    onfocus="LookupHoverDetail.getHover('{!$Component.owner}', '/{!ownerId}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"
                    onmouseout="LookupHoverDetail.getHover('{!$Component.owner}').hide();"
                    onmouseover="LookupHoverDetail.getHover('{!$Component.owner}', '/{!ownerId}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"
                 value="{!URLFOR('/' + ownerId)}">{!ownerName}</apex:outputLink>&nbsp;
                <apex:outputLink value="{!URLFOR('/' + recordId + '/a?retURL=' + URLENCODE($CurrentPage.Url))}">[Change]</apex:outputLink>
            </apex:outputPanel>
        </apex:pageBlockSectionItem>       
        <apex:pageBlockSectionItem >
            <apex:outputLabel value="{!$Label.CreatedBy}" />
            <apex:outputPanel >
                <apex:outputLink id="createdBy"
                    onblur="LookupHoverDetail.getHover('{!$Component.createdBy}').hide();"
                    onfocus="LookupHoverDetail.getHover('{!$Component.createdBy}', '/{!createdById}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"
                    onmouseout="LookupHoverDetail.getHover('{!$Component.createdBy}').hide();"
                    onmouseover="LookupHoverDetail.getHover('{!$Component.createdBy}', '/{!createdById}/m?retURL={!URLENCODE($CurrentPage.Url)}&isAjaxRequest=1').show();"                      
                 value="{!URLFOR('/' + createdById)}">{!createdByName}</apex:outputLink>&nbsp;
                <apex:outputText value="{!convertedCreatedDate}" />                                                                        
            </apex:outputPanel>
        </apex:pageBlockSectionItem>
    </apex:pageBlockSection>
</apex:component>

Notice the two pageBlockSectionItem tags whose rendered attributes use the new isMaster property to show or hide the owner information, depending on whether the object is the detail side of a Master-Detail relationship.

With this, the visualforce page is simplified as follows:

<apex:page standardController="Contact">
  <apex:pageBlock mode="mainDetail" >
    <c:ComponentControllerSystemInformation record="{!record}" />
  </apex:pageBlock>
</apex:page>

We include the component inside a pageBlock tag. Our page now shows the system information just as a standard page would:

systemInformation

And finally, for completeness, here is the test for the component:

@isTest
public class SystemInformationComponentTest{
  
    @isTest public static void TestComponent() {
         
        Account record = new Account(Name = 'Test');
        insert record;
        record = [select ownerId,createdById,lastModifiedById,createdDate,lastModifiedDate from account where id = :record.id];
        User owner = [select name,smallPhotoUrl from user where id = :record.ownerId];
             
        ComponentControllerSystemInformation controller = new ComponentControllerSystemInformation();
        controller.recordValue = record;
         
        System.assertEquals(record.Id, controller.recordId);
        System.assertEquals(record.CreatedById, controller.createdById);
        System.assertEquals(record.LastModifiedById, controller.lastModifiedById);
        System.assertEquals(record.OwnerId, controller.ownerId);
        System.assertEquals(record.CreatedDate.format(), controller.convertedCreatedDate);
        System.assertEquals(record.LastModifiedDate.format(), controller.convertedLastModifiedDate);
        System.assertEquals(owner.Name, controller.ownerName);
        System.assertEquals(owner.SmallPhotoUrl, controller.ownerPhoto); 
        System.assertEquals(owner.Name, controller.lastModifiedByName);
        System.assertEquals(owner.Name, controller.createdByName);
        System.assertEquals(true, controller.isMaster);
    }
}

Our custom pages now have the same functionality as standard ones.


(You can also read this article in Spanish here)

A high percentage of the Salesforce projects I've seen require some kind of integration with backend systems such as ERPs, databases, or even other CRMs. Salesforce offers many options to make these integrations possible: workflow outbound messages, SOAP/REST callouts, batch integration with the Data Loader, etc. There is of course a lot of documentation on the topic, but most of it is based on Java, and there is little on using .NET. I'm a .NET developer, so I will provide a few examples of how to integrate with Salesforce using Visual Studio and C#. I can't cover all the possible integration scenarios in one article, so I will focus on the one I consider my first option when designing the architecture of a solution that requires integration: workflow outbound messages.

The Requirements

Allow me to explain the concepts using a concrete example: one of our clients needed to integrate their Salesforce org with their ERP. Both Salesforce and the ERP manage accounts, but Salesforce had to be the master system for accounts. The integration was based on the following rules:

  • Accounts are created in Salesforce (not in the ERP)
  • Any modification to account data in Salesforce needs to be reflected on the ERP (if the account exists in the ERP)

The above rules translate to the following:

  • We could create a trigger on Account that does a callout to a web service in the ERP, but there’s an easier way (less code): use a workflow outbound message.
  • We will create a workflow outbound message on Account that fires when an account is created or modified and sends the fields we need to our ERP.

Let’s dive into code!

Workflow Outbound Message in Salesforce

In Salesforce, go to “Setup->Build->Create->Workflow & Approvals->Workflow Rules”. Create a new rule, and base it on the Account object:

Workflow account rule

Click Next. Give it a name (I will use UpdateAccountOnERP) and for the evaluation criteria specify “created, and every time it’s edited”. We want this option because we want to update the account in the ERP every time a modification is made in Salesforce. We want this workflow to fire every time, so for the rule criteria choose “formula evaluates to true” and specify true as the formula. The workflow should look like this:

Workflow account rule details

Click Save & Next. On the next screen you will add a workflow action: click the “Add Workflow Action” dropdown and select “New Outbound Message”. On the New Outbound Message screen, give it a name and unique name (I will use SendAccountToERP), put any URL as the endpoint since we don’t have our web service yet, and finally select all the fields you would like to send to the ERP. When finished, click Done. You will be taken back to the workflow rule screen. Make sure you activate the workflow (click the “Activate” button).

Account workflow rule summary

We have created an outbound message that will be triggered when an account is modified, but the message will fail to deliver since we have not yet created a web service to handle it. For that we need the WSDL of the outbound message. Click on the outbound message description and you will be taken to the Workflow Outbound Message Detail page:

Account outbound message detail

Click the “Click for WSDL” link. A new window with the WSDL will open. Save the WSDL to disk (I named it accountWorkflowOutboundMessage.wsdl); we will need it in Visual Studio.

Web Service in Visual Studio (using WCF)

Now that we have the WSDL for the outbound message, we need to create a web service to handle the message. This article gives you an overview of how to create an asmx web service, but I’d rather work with WCF (asmx is getting old and WCF is meant to replace it). Also, I will use Visual Studio 2013 Community edition for this. Open Visual Studio and create a new empty ASP.Net Web Application:

New Asp.Net project
Empty Asp.Net project



We will add a WCF service to the project. Right click on the project name in Solution Explorer and choose “Add->New Item”. Select “WCF Service”:

New WCF web wervice

Visual Studio will add the required assemblies to the project and will create three files: IAccountNotificationService.cs, AccountNotificationService.svc and the underlying AccountNotificationService.svc.cs. We need to replace the generated interface definition so it matches the definition of the outbound message. Add the WSDL file to your project, then open a command prompt in the directory where the WSDL resides (if you have the Productivity Power Tools extension you can right-click the project in Solution Explorer and choose “Power Commands->Open Command Prompt”). At the command prompt, type the following:

svcutil /noconfig /out:IAccountNotificationService.cs accountWorkflowOutboundMessage.wsdl

The svcutil command creates the interface for the web service definition. Here is the output of the command:

svcutil output

Notice that the command replaced the contents of the file IAccountNotificationService.cs with the web service definition from the WSDL. Your project should now look like this:

Solution explorer

Now, there are some tricks to make this work. Open IAccountNotificationService.cs and change the following:

  • Put the generated code inside the namespace of the project (in my case that is the project name, WorkflowNotificationServices).
  • Change the name of the interface from NotificationPort to IAccountNotificationService.
  • Change the ConfigurationName parameter of the ServiceContractAttribute attribute from NotificationPort to AccountNotificationService.
  • Remove the parameter ReplyAction=”*” from the OperationContractAttribute attribute. This is important in order to make things work.
  • At the end of the file, remove the interface NotificationPortChannel and the class NotificationPortClient (we don’t need these since they are used by a client consuming the web service).

The interface should now look like the following:

namespace WorkflowNotificationServices
{
    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
    [System.ServiceModel.ServiceContractAttribute(Namespace = "http://soap.sforce.com/2005/09/outbound", ConfigurationName = "AccountNotificationService")]
    public interface IAccountNotificationService
    {

        // CODEGEN: Generating message contract since the operation notifications is neither RPC nor document wrapped.
        [System.ServiceModel.OperationContractAttribute(Action = "")]
        [System.ServiceModel.XmlSerializerFormatAttribute()]
        [System.ServiceModel.ServiceKnownTypeAttribute(typeof(sObject))]
        notificationsResponse1 notifications(notificationsRequest request);
    }

    /// rest of the code below
}

Next, open the file AccountNotificationService.svc.cs, remove the DoWork method, and implement the IAccountNotificationService interface (place the cursor on the interface name, press Ctrl+. and choose “Implement interface IAccountNotificationService”).
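
At this point the service class should look roughly like the following stub (a sketch; Visual Studio’s “Implement interface” refactoring generates a method body that simply throws, which we will replace in a moment):

using System;

namespace WorkflowNotificationServices
{
    public class AccountNotificationService : IAccountNotificationService
    {
        public notificationsResponse1 notifications(notificationsRequest request)
        {
            // Placeholder generated by Visual Studio; replaced with the real ERP update logic below
            throw new NotImplementedException();
        }
    }
}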

Finally, open web.config and replace it with the following:

<configuration>
    <connectionStrings>
        <add name="ERP" connectionString="Data Source=localhost;Initial Catalog=ERP;Integrated Security=True" providerName="System.Data.SqlClient"/>
    </connectionStrings>
    <system.web>
        <compilation debug="true" targetFramework="4.5" />
        <httpRuntime targetFramework="4.5" />
        <webServices>
            <protocols>
                <clear/>
                <add name="HttpSoap" />
                <add name="Documentation"/>
            </protocols>
        </webServices>
    </system.web>
    <system.serviceModel>
        <services>
            <service name="WorkflowNotificationServices.AccountNotificationService">
                <endpoint binding="basicHttpBinding" contract="AccountNotificationService"/>
            </service>
        </services>
        <behaviors>
            <serviceBehaviors>
                <behavior name="">
                    <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
                    <serviceDebug includeExceptionDetailInFaults="false" />
                </behavior>
            </serviceBehaviors>
        </behaviors>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
    </system.serviceModel>
</configuration>

Notice the connection string to the ERP database in the connectionStrings section; I will use it later in the implementation of the web service. Also notice the declaration of the web service inside the system.serviceModel section.

Your project should compile at this point, and we should be able to get the WSDL of our web service. Run the project and connect to the web service from a browser. You should see something like this:

Account web service

All we have left is to write the code that updates our ERP. In this example I will simply call a stored procedure on a SQL Server database (it’s just an example; here you can do whatever you need to communicate with your backend). My service implementation looks like this:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace WorkflowNotificationServices
{
    public class AccountNotificationService : IAccountNotificationService
    {
        public notificationsResponse1 notifications(notificationsRequest request)
        {
            notifications notifications1 = request.notifications;

            AccountNotification[] accountNotifications = notifications1.Notification;
            foreach (AccountNotification accountNotification in accountNotifications)
            {
                Account account = (Account)accountNotification.sObject;

                ConnectionStringSettings connectionString = ConfigurationManager.ConnectionStrings["ERP"];
                using (SqlConnection cn = new SqlConnection(connectionString.ConnectionString))
                {
                    using (SqlCommand command = new SqlCommand("salesforce_updateAccount", cn))
                    {
                        command.CommandType = CommandType.StoredProcedure;

                        command.Parameters.Add("@idSalesforce", SqlDbType.VarChar).Value = account.Id;
                        command.Parameters.Add("@name", SqlDbType.VarChar).Value = account.Name;
                        command.Parameters.Add("@number", SqlDbType.VarChar).Value = (object)account.AccountNumber ?? DBNull.Value;
                        command.Parameters.Add("@address", SqlDbType.VarChar).Value = (object)account.BillingStreet ?? DBNull.Value;
                        command.Parameters.Add("@city", SqlDbType.VarChar).Value = (object)account.BillingCity ?? DBNull.Value;
                        command.Parameters.Add("@state", SqlDbType.VarChar).Value = (object)account.BillingState ?? DBNull.Value;
                        command.Parameters.Add("@postalCode", SqlDbType.VarChar).Value = (object)account.BillingPostalCode ?? DBNull.Value;
                        command.Parameters.Add("@country", SqlDbType.VarChar).Value = (object)account.BillingCountry ?? DBNull.Value;

                        cn.Open();
                        command.ExecuteNonQuery();
                    }
                }
            }

            notificationsResponse response = new notificationsResponse();
            response.Ack = true;

            return new notificationsResponse1() { notificationsResponse = response };
        }
    }
}

It is important to set the Ack field of the response to true; otherwise Salesforce will assume there was an error, keep the outbound message in its queue, and retry sending it at regular intervals.
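
That retry behavior can work in our favor: if the ERP is temporarily unreachable, we can return a negative acknowledgement and let Salesforce retry later. Here is a minimal sketch of the idea, where UpdateErp is a hypothetical helper holding the database code shown above:

public notificationsResponse1 notifications(notificationsRequest request)
{
    notificationsResponse response = new notificationsResponse();
    try
    {
        UpdateErp(request.notifications); // hypothetical helper wrapping the stored procedure calls shown above
        response.Ack = true;              // success: Salesforce removes the message from its queue
    }
    catch (Exception ex)
    {
        System.Diagnostics.Trace.TraceError(ex.ToString());
        response.Ack = false;             // failure: Salesforce keeps the message queued and retries
    }
    return new notificationsResponse1() { notificationsResponse = response };
}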

Testing the Outbound Message

We need to publish our web service and make it available on the internet. Publishing is outside the scope of this article. In my case I published it on an IIS server in our DMZ, and the public URL is http://http.grupolanka.com/Salesforce/WorkflowNotificationServices/AccountNotificationService.svc (don’t try it, it won’t work).

Now I need to go back to Salesforce and change the URL of the outbound message I created earlier. In Salesforce go to “Setup->Build->Create->Workflow & Approvals->Outbound Messages”. Edit the outbound message definition (in this example, SendAccountToERP) and update the “Endpoint URL” field:

Account outbound message url

Save it. Open an account and make a modification; Salesforce will now call the web service, passing the fields specified in the outbound message definition.

And that’s it! You can now have Salesforce “talk” to your backend. In a future article I will complicate things a bit and let the web service call back into Salesforce for additional data. You can get the sample project here:


(You can read this article in Spanish here)

Salesforce allows you to access external data sources using the OData protocol. This is actually one of the most useful features when designing an integration strategy with backend systems. Instead of using web services or the DataLoader to push data into Salesforce, we can read up-to-date data directly from the third-party system through an external data source. With an external data source, and the external object it creates, we can access information located outside Salesforce as if it were stored inside Salesforce. The only disadvantage at the moment is that an external data source is read-only, so we cannot use it to update data in the third-party system. But still, this opens up a whole lot of integration scenarios for us.

Let’s see an example of when this might come in handy. I recently worked on a project that required an integration between Salesforce and the ERP. Sales executives needed to see account administrative alerts (e.g. “client has pending bills”, or “client has a claim on its latest bill”, etc.) in Salesforce. These alerts were generated in the ERP by the administrative users, so clearly we needed to bring them into Salesforce.

One approach would be to use the DataLoader to upload the alerts to Salesforce in batch, with a predefined frequency (once a day, for example). This solution is quick and easy but requires a lot of steps: creating the Alerts object in Salesforce, setting up the DataLoader in command line mode to retrieve the data directly from the ERP and upsert it into Salesforce, and using a scheduler to execute the DataLoader job at regular intervals. But it turns out there is an easier way! Well, easier if you read this post :-)

Instead of putting the data into Salesforce, we will let Salesforce retrieve the data dynamically from the ERP. For this we will use an external data source and create an external object. You can create an external object based on OData (unfortunately only version 2.0, although at the time of writing OData is at version 4.0). Let’s see how we can use Visual Studio and .Net to expose the data from the ERP.

Setting up the Example

I will use a pretty simple example, simulating the ERP with a SQL Server database. We have an Account table and an Alert table that defines the alerts for our accounts. Our ERP looks like this:

Database

Use the following script to create the database and the tables:

/****** Object:  Database [ERP]    Script Date: 12/30/2014 17:22:48 ******/
CREATE DATABASE [ERP] ON  PRIMARY 
( NAME = N'ERP', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\ERP.mdf' , SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
 LOG ON 
( NAME = N'ERP_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\ERP_log.ldf' , SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

USE [ERP]

/****** Object:  Table [dbo].[Account]    Script Date: 12/30/2014 17:22:49 ******/
CREATE TABLE [dbo].[Account](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [name] [varchar](50) NOT NULL,
 CONSTRAINT [PK_Account] PRIMARY KEY CLUSTERED 
(
    [id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object:  Table [dbo].[Alert]    Script Date: 12/30/2014 17:22:49 ******/
CREATE TABLE [dbo].[Alert](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [accountId] [int] NOT NULL,
    [alertText] [varchar](50) NULL,
 CONSTRAINT [PK_Alert] PRIMARY KEY CLUSTERED 
(
    [id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object:  ForeignKey [FK_Alert_Account]    Script Date: 12/30/2014 17:22:49 ******/
ALTER TABLE [dbo].[Alert]  WITH CHECK ADD  CONSTRAINT [FK_Alert_Account] FOREIGN KEY([accountId])
REFERENCES [dbo].[Account] ([id])
GO
ALTER TABLE [dbo].[Alert] CHECK CONSTRAINT [FK_Alert_Account]
GO

And populate the database with the following script:

insert into Account values('ModicaTech')
insert into Account values('Microsoft')
insert into Account values('Salesforce')
insert into Alert values(1,'Client requesting support')
insert into Alert values(2,'Client has pending receipts')

Creating the OData Web Service

Using Visual Studio 2013 Community edition, create a new empty ASP.Net Web project. We will add an Entity Framework data model to this project. Right-click the project name and choose “Add New Item”. Select “ADO.NET Entity Data Model” and name it ErpModel.

New ADO.Net data model

On the Entity Data Model Wizard, select “EF Designer from database” and click Next. Create a new connection to the SQLServer and the ERP database:

New connectionERP connection

Click Next and make sure you select “Entity Framework 5.0” as the version of Entity Framework to use, and click Next. Select both tables:

EF select tables

Click Finish. The Wizard will generate the code to connect to the database. We will now add the OData service to the project. Right click on the project name and choose “Add New Item…". Select “WCF Data Service 5.6” and name it ErpService.svc:

New OData service

The wizard will create the ErpService.svc.cs file and open it in the editor. Replace the TODO placeholder with the name of the entity data model. The file should look like this:

using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;
using System.Web;

namespace ErpODataService
{
    public class ErpService : DataService<ERPEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

Notice that DataService&lt;ERPEntities&gt; specifies the name of our entity data model. The SetEntitySetAccessRule call grants read access to all of our entities, and MaxProtocolVersion sets the OData version to 2.0 (the one required by Salesforce). If you run the project you should see something like this:

OData service
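
If you’d rather not expose every table through the service, you can grant read access only to the entity sets you need. A minimal sketch, assuming the Entity Framework model generated entity sets named Accounts and Alerts for our two tables:

public static void InitializeService(DataServiceConfiguration config)
{
    // Expose only the entity sets we actually need, read-only
    config.SetEntitySetAccessRule("Accounts", EntitySetRights.AllRead);
    config.SetEntitySetAccessRule("Alerts", EntitySetRights.AllRead);
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}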

Publish the web service to a public internet URL. I have published it on an IIS server in our DMZ with the URL http://www.grupolanka.com/Salesforce/ErpODataService/ErpService.svc (don’t try it, it won’t work).
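
Before pointing Salesforce at the service, it can be worth a quick smoke test of the published feed. A minimal sketch using WebClient (the URL is just the placeholder above, and Alerts is assumed to be the entity set name generated by the data model):

using System;
using System.Net;

class FeedCheck
{
    static void Main()
    {
        // Request the Alerts entity set; an OData 2.0 service returns an Atom feed by default
        string url = "http://www.grupolanka.com/Salesforce/ErpODataService/ErpService.svc/Alerts";
        using (WebClient client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(url));
        }
    }
}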

Creating the External Data Source in Salesforce

In Salesforce, go to “Setup->Build->Develop->External Data Sources” and click the “New External Data Source” button. Specify a label and name, and make sure you select “Lightning Connect: OData 2.0” as the type. In the Server URL field put the URL of the OData service we created, and for the format select AtomPub. Specify Anonymous as the authentication, since we haven’t set up any security on the OData service (for simplicity). The external data source should look like the following:

 External data source

After you save it, click the “Validate and Sync” button:

Validate and sync external data source

Select the Alerts object and click the Sync button. Salesforce will create the Alerts__x object. Click on it to see its properties:

Alerts object

We will now create a relationship between this external object and the Account object to link alerts to accounts. But first we need to create a custom field on Account and mark it as an external ID in order to make the relationship. Go to “Setup->Build->Customize->Accounts->Fields” and, in the “Account Custom Fields & Relationships” section, create a new custom field called External_ID. Make sure to mark it as “Unique” and “External ID”:

External id

Go back to the Alerts external object (“Setup->Build->Develop->External Objects”). In the “Custom Fields & Relationships” section click New. Choose “Indirect Lookup Relationship” as the field data type, then select Account in the “Related To” field:

Related to

Click Next. In the target field, select the custom field we created on Account earlier (External_ID__c):

Target field

Click Next. Enter a label and name for the field, and make sure you specify accountId as the “External Column Name”. This is the column in the external data source that contains the foreign key to the account.

Indirect lookup relationship properties

Click Next on the rest of the Wizard.

We now need to create a Tab for the new object and add it to an application. Go to “Setup->Build->Create->Tabs” and create a new tab for the Alerts object. Select a theme. Then assign the tab to an application. You should now see something like this:

Alerts tab

Click on one of the records to see its details:

Alert detail


Seeing the External Data on a Salesforce Object

Now let’s open an account (any account) and specify a value for the “External ID” field. Use 1 as the value, so we will get all the alerts associated with the account whose id is 1:

accountDetail

Notice that you will need to edit the layout for the related list to show the alertText field.

And this is it! We can dynamically see the data from our ERP in Salesforce. Let’s do a test. Insert a new alert for the account with id 1:

insert into Alert values(1,'This thing works!')

Refresh the account in Salesforce and you should see the new alert:

New alert


You can find the Visual Studio project here: