Backbone Requires a Main View for the App?

Tuesday, December 27, 2011

Here's a continuation of my previous post's problem: Backbone.js and its view-centric MVC architecture.

Would we really want a model to be able to render itself to the page? Considering the information it has, I don't think we do. First of all, how does it know where in the DOM to draw itself? I suppose we could give it a specific DOM element as an attribute when creating the model; then it would know where to draw itself. But what if this model is in a collection, such as a list? How does it know where in the list to render? We could probably wrestle through this problem, but the code would get somewhat mangled, and each model instance would get pretty heavy.

It seems, then, that this task of rendering a model is best performed by either the model's collection or its view. In other MVC frameworks, a 'controller' is used to place the model's information in the view's DOM. In Backbone, this is the responsibility of the 'View' object. Why is it called View instead of Controller? I believe it was named this way because it is meant to 'resolve to' HTML code. Not a name that suits the MVC style, but I can see the logic.
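
For illustration, here's a minimal sketch of that division of labor (the ItemView name and the 'name' attribute are hypothetical): the model holds the data, and the view resolves it to an HTML fragment that a parent can place into the DOM.

//Hypothetical sketch: the view owns a DOM element and re-renders whenever
//its model changes; the model itself never touches the DOM.
var ItemView = Backbone.View.extend({
    tagName: 'li',
    initialize: function() {
        _.bindAll(this, 'render');              //keep 'this' bound to the view
        this.model.bind('change', this.render); //re-render on model changes
    },
    render: function() {
        $(this.el).html(this.model.get('name'));
        return this; //convention: return this so a parent can grab this.el
    }
});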

A Backbone app, then, requires a main view (controller) that spawns views and models, using them purely for the sake of code organization and delegating to them solely the task of data integrity. I started hacking at my Backbone app thinking that the main view's (controller's) job is just to initialize everything, but this is not the case. Instead, it is in charge of creating and destroying the various Views and Models in your app and associating models with views.
(I'm still not sure of the recommended relationships between views and models. I'll have to keep reading Stack Overflow's backbone tag to learn more.)

I'm starting to think that this framework is overkill for what I'm trying to do. Something like Knockout.js seems like a smarter choice. I've heard its developer say that it is great for creating "JSON editors": request JSON from the server, then bind its attributes to fields for easy editing. It is also on my list of libraries to try.

What SHOULD MVC be for Javascript? Towards other JS MVC ideas.

Saturday, December 24, 2011

With all the trouble I've been having with Backbone.js lately, I've decided to take a step back and look at my attempts from a higher level. Using a framework shouldn't be this difficult! Why, then, am I having so much difficulty? I've decided that it is because my idea of "Javascript's version of MVC" is off base. Therefore, I've decided to start looking at the whole spectrum of Javascript frameworks - Backbone, Spine, Knockout, and JavascriptMVC (did I miss any other major ones?).

Addy Osmani (not quite sure why he's famous, but he's skilled at explaining these frameworks at a high level) recommends JavascriptMVC, because it is the "most comprehensive" and mature framework at the moment. I hope the community is friendly, because I've decided to choose this as the next stepping stone in my Javascript MVC studies. I hope the people who created it used intuitive object names and abstractions, as I'll need them to improve my understanding.

While reading this introduction to JavascriptMVC, I was inspired to return to the idea of an application's "state" once again. The author asserts that "The secret to building large apps is NEVER build large apps. Break up your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application." I agree completely with this! But how can one make bite-sized Javascript pieces? Inevitably, these pieces must talk to each other, so how can we do this?

How about this idea for communicating between modules: use the page's state variables to communicate between these pieces in a pub-sub manner. This is very much like how disparate computer systems communicate across the internet - probably the most modular system I am aware of - and if modularity is what you want, then borrowing the internet's model is probably best.
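
As a rough sketch of the idea (framework-free, with hypothetical names), a page-level state object could let modules publish and subscribe to named variables:

//Minimal pub-sub sketch: modules publish changes to named state variables,
//and other modules subscribe to the variables they care about.
var pageState = {
    subscribers: {},
    subscribe: function(key, callback) {
        (this.subscribers[key] = this.subscribers[key] || []).push(callback);
    },
    set: function(key, value) {
        var callbacks = this.subscribers[key] || [];
        for (var i = 0; i < callbacks.length; i++) {
            callbacks[i](value); //notify every module watching this variable
        }
    }
};

//Module A reacts to a state change that Module B publishes.
pageState.subscribe('selectedItemId', function(id) { /* re-render a detail pane */ });
pageState.set('selectedItemId', 42);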

So, the page's state. Would having a set of variables on your web page limit the scalability of your site? As long as you aren't transferring that state back to the server - that is, as long as the web app talks to your servers through a stateless API - you should be fine.

Great - I can't think of any reason why we can't keep a few state variables on our client-side page, so I'll try using state to facilitate communication between Javascript components (if it is easy to do with JavascriptMVC, that is).

Backbone's Perverted Architecture and MVC

Sunday, December 18, 2011

I'm still working on getting my Backbone.js project to work. So far, I've got a text box and a list that is supposed to update itself. Not too impressive. Why am I not successful? Three reasons: (1) I've never worked in Javascript before, (2) I haven't used this framework before, and (3) I haven't used this perverted style of MVC development before.

Perverted style of MVC? Yes, Backbone doesn't tell you how to use its objects, it feels like it is saying
"Here's 3-4 Javascript objects that we like. They work great together! They have built-in behavior, which makes it awesome! However, there's a lot that ISN'T built-in, and we leave that to you to figure out. But this is what makes Backbone flexible! Great, isn't it?"

Some questions I find myself asking: How should I structure my JS app? View-centric, model-centric, or collection-centric? I would like to just manage the Javascript side of the app - just worry about the collections or models. If I add a new model to a collection, the changed collection should automatically render to the page. I have found no examples for a model-centric app like this. I find myself with a machete in a jungle, hacking my way through the objects and their relationships, trying to find the path that the Backbone designers have left for me to find.

Looking through the Stack Overflow questions and answers, it seems that it is possible to create view-only apps with Backbone. This confirms my suspicion that I've been using Backbone in a backwards way, trying to create my models first. One thing I learned today is that a Backbone View is meant to 'resolve' to an HTML tree that can be applied to the page's DOM. That is, your master 'AppView' is meant to create sub-views, then request their HTML renderings to apply onto the page. Frick! This makes no sense! So the view can't actually render itself on the page? It requires a parent view to place it in the correct place? No! Maybe if I understood the capabilities of Javascript a bit better, I could make Backbone do it this way, but I was not successful.

Next time I try to fix this app, I'll take the view-centric approach. I'll have my master AppView spawn child views, then pass each one a model to hold its data. It's a very silly way of approaching the problem, but I'll do it - just to see if Backbone actually works with Visualforce. Then, I'll look into optimizing the architecture, unless I move on to a better JS MVC framework.

Since I've been taking this 'backwards' model-centric approach, I'll change gears when I next stab at this project. That will work, for sure. Then, I'll try Spine.js, which looks like it has a much saner architecture. Spine is also appealing because client interactions are completely decoupled from the server, which sounds like the ideal remedy for the slow response times between Visualforce pages and their Salesforce servers.

Initial Backbone.js on Force.com Issues

Thursday, December 15, 2011

I spent a fair amount of time the last day or two working on creating a Visualforce page that uses backbone.js as an intermediary for communicating with the Apex controller via Ajax calls. I am still doubtful whether this framework will prove to be a good way to interface with Salesforce objects and data, but I won't know until I finish.

Now, much of my time so far has been spent connecting the several backbone.js objects - the model, the view, the controller, and the HTML templates - and sorting out their duties to one another.

While backbone.js is a framework that follows the MVC pattern, backbone's view serves a different purpose than the view in other frameworks, such as Salesforce's own MVC framework of Visualforce and Apex. The purpose of Backbone's view is to bind to an HTML element, and, when appropriate, render data from the model into this element using an HTML template.

Did you also notice the interesting bit here? There are actually two views in play! The HTML is the true (or truer) view, while the backbone.js view supplies the HTML view with logic, acting almost like a controller.
So if the view is a controller in backbone.js, then what is the controller? Well, in backbone.js, the C represents the collection, which is a "smart list" that holds backbone models. Likewise, backbone models are simply Javascript objects, nothing more!
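
A tiny sketch of that "smart list" behavior (the collection and attribute names are hypothetical) - the list fires events as models come and go:

//Hypothetical sketch: a collection is a list of models that fires events.
var Items = Backbone.Collection.extend({ model: Backbone.Model });

var items = new Items();
items.bind('add', function(item) {
    console.log('added: ' + item.get('name')); //react to list changes here
});
items.add({ name: 'first item' }); //raw attributes are vivified as a model, and 'add' fires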

Well, that's a slight simplification, since each of these backbone objects has a bit more responsibility than I explained above. I'll explain the jobs/responsibilities of each of these backbone.js objects in my next post. For now, I want to discuss a few of the issues I've encountered.

The first problem I had was with jQuery. On Visualforce pages it is usually necessary to run jQuery in no-conflict mode and to alias it to a variable other than '$', such as '$j'. Backbone, however, depends on two other Javascript libraries, Underscore.js and jQuery, and uses them extensively under the covers. I had some problems using no-conflict mode here, but they were solved with a few well-known tricks. Here is one way to solve the problem; this great blog post has some other solutions.

var $j = jQuery.noConflict();
$j(document).ready(function($) {
    //All Backbone code goes here; inside this callback, '$' is safely bound to jQuery again.
});


The other problem is that Backbone.js is built to GET data from and POST data to the server in a RESTful way. So naturally, I started writing a @RestResource class to respond to these calls, retrieving and updating data in the database. This is where I started to encounter problems: these REST handlers live at na4.salesforce.com/services/apexrest/, which is a cross-site request from the Visualforce page at c.na4.visual.force.com/apex/. I'm still not positive this is the problem, but I'm pretty sure. To resolve it, I will attempt to handle Backbone's sync calls with Javascript Remoting.
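
Backbone funnels all of its server communication through the single Backbone.sync function, so in principle the whole REST layer can be swapped out. Here's a minimal sketch of what that might look like; the @RemoteAction name MyController.syncModel is purely hypothetical:

//Hypothetical sketch: route Backbone's CRUD calls through Javascript Remoting
//instead of REST. MyController.syncModel is an assumed @RemoteAction.
Backbone.sync = function(method, model, options) {
    MyController.syncModel(method, JSON.stringify(model.toJSON()),
        function(result, event) {
            if (event.status) {
                options.success(JSON.parse(result)); //hand the saved attributes back to Backbone
            } else {
                options.error(event.message);
            }
        });
};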

Backbone.js and AUI on Force.com - Good Idea?

Web development is an art. It is easy to make a web page, but very difficult to make a beautiful one. It is easy to make a static page, but very difficult to make a dynamic one. It is easy to make a site with multiple pages, but very difficult to make the content intuitive and interesting to navigate. A page I make may seem perfect to me, but a friend may look at it and dislike it - or, more ideally, offer constructive criticism about it.

Because web development is such an art form, it matures over the years. In the early days, there were single-page sites. Then came multi-page sites, followed by interactive/dynamic sites, which, as they matured and became more user- and data-centric, grew into full-fledged web apps. All these changes were enabled by a maturing culture among web developers, new technologies and languages, and a shifting market of consumers of web sites and web applications.

Some web apps are data-centric, such as most apps on the platform I work with, Force.com. What I dislike about the platform, however, is how long it takes for requests to return from the server and for the page to refresh. When I came across the idea of Asynchronous User Interfaces for web sites, I fell in love with it. This is how a web site UI *should* feel: fast like a native application, while still letting me dynamically manipulate and synchronize data with the backend.

So why, then, are more people not making AUI web sites? I could understand if it were much more difficult, but with the right abstractions, it seems to be not difficult at all. Choosing a simple library, such as backbone.js, spine.js, or knockout.js, can make developing one of these one-page sites much easier.

I want to give a high-level overview of how a tool like backbone.js could be used on top of the Force.com platform. Let's take a look at the building blocks we have. Force.com is the server-side database. The methods on the Apex controller are the intermediary between the client-side web page and the database. Then we have the client-side web page, which is HTML/CSS for the display and Javascript for the client-side logic.

What I want is an HTML table that is a view into a Javascript object collection. When I modify elements of the HTML table, I want the modifications to directly modify the Javascript object collection bound to it. Then, I want it to save to the Force.com database, either automatically at appropriate times or on demand by clicking a save button. Let's see if we can make this happen.
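
As a rough sketch of that flow (the table markup, field names, and Backbone usage are all hypothetical, and jQuery is assumed to be aliased to $j):

//Hypothetical sketch: table edits write into the bound collection,
//and a Save button pushes each row's model to the server.
var rows = new Backbone.Collection([{ Name: 'Acme', Amount__c: 100 }]);

$j('#myTable input').live('change', function() {
    var row = rows.at($j(this).closest('tr').index()); //the model bound to this row
    var attrs = {};
    attrs[$j(this).attr('name')] = $j(this).val();
    row.set(attrs); //the edit now lives in the collection
});

$j('#saveButton').live('click', function() {
    rows.each(function(row) { row.save(); }); //persist each row via Backbone.sync
});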

I have just a little experience with Javascript and web development, so I'm not sure if this is possible, a good idea, or even better than what exists. I'll find out soon enough.

Changing Object Functionality in OOP Languages

Sunday, December 11, 2011

A while ago, I had a conversation with a coworker about the maintainability of code and other aspects of software craftsmanship. Suppose I write a method and think it's perfect. Inevitably, this method will have to change its functionality, probably due to a change in requirements. In cases like this, how should one approach the refactoring? I just came across the concept of the "open-closed" principle, conceived way back in 1988.
en.m.wikipedia.org/wiki/Open/closed_principle
I originally was influenced by the idea of immutable data structures, mostly due to listening to Rich Hickey praise them in his talks on Clojure and its concepts. Immutable data structures are interesting because they can be shared among different methods and threads safely. Immutability helps remove any surprises caused by method/function side effects. Though I haven't run into this problem before, it's a very attractive concept because it encourages simplicity in reading and maintaining existing code. Simplicity is king in the land of software development.
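
As a tiny illustration of the "no surprises" point, in plain ES5 Javascript (the config object is hypothetical):

'use strict';
//Freezing a shared structure means no callee can mutate it behind our back.
var config = Object.freeze({ retries: 3, timeoutMs: 500 });

function riskyHelper(cfg) {
    //cfg.retries = 99; //would throw a TypeError in strict mode
    return cfg.retries + 1; //callers know 'config' itself is unchanged afterward
}
riskyHelper(config);
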
The open-closed principle, on the other hand, promotes simplicity in method/API design by promising not to change the API and its implementation. This reduces the chance of creating bugs in the callers of your methods. If the implementation of a unit of code must change, the open-closed principle instructs us to subclass and override the unit of code we wish to change. This sounds great, and makes our code immutable in a sense, but it seems to forget about another core tool of programming: the type system.
This refinement of the open-closed principle occurred in the 1990s, according to Wikipedia. The revised version of the principle instructs us to use interfaces or abstract classes and create new implementations of them, rather than directly subclassing the class we want to change. This is a much better solution, as interfaces are a more precise way of expressing the functionality and purpose of an object.
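
Javascript has no formal interfaces, but the idea can be sketched with a shared contract that callers depend on (all names here are hypothetical): the calling code stays closed to modification, while new behavior arrives as new implementations.

//Callers depend only on the contract: any object with a format(record) method.
function printRecord(formatter, record) {
    return formatter.format(record); //closed to modification...
}

//...but open to extension: new behavior means a new implementation, not an edit.
var jsonFormatter = { format: function(record) { return JSON.stringify(record); } };
var nameFormatter = { format: function(record) { return record.name; } };

printRecord(jsonFormatter, { name: 'Ada' }); //'{"name":"Ada"}'
printRecord(nameFormatter, { name: 'Ada' }); //'Ada'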
It may be a challenge, but we should remember to use the tools that our languages provide us. We should remember to use interfaces where appropriate.

The Base of Maintainable Programming Languages - Libraries over Syntax?

Tuesday, December 6, 2011

I often wonder about what language is "the best". It's fun to think about these things theoretically.

I believe that there are three relevant ways of expressing meaning in a language: syntax, idioms, and libraries. Syntax is direct meaning; the compiler won't let you write bad syntax. Idioms are common ways of using language features to solve common problems. Libraries solve big problems that require lots of code, abstracting them behind a simple interface - a new syntax, in effect.

Discussing just the library side of languages -
If a language has very powerful standard syntax, newbies will try to solve all of their problems with the language's standard syntactic tools. If a language has fewer syntactic tools, then the newbie will be forced either to write lots of code to solve simple problems, or to dig into the language's community and common libraries to solve them.

I argue that it is theoretically better to have a language with less complex syntax and a stronger community-maintained set of libraries that solve common problems - such as Lisp/Clojure, where the language's entire functionality is constituted of its libraries. This way, when you attempt to solve a problem, you reinvent very little; rather, you subscribe to an existing solution to the problem.

When others come to look at your code later on, they can see what solution you chose to subscribe to. Even better, your solution/library has a name attached to it, and that name can be Googled to find much more information about that solution on that library's website or IRC channel.

A Pattern for Portable Apex Unit Tests

Friday, August 5, 2011

Force.com is a multi-tenant platform, so if your code runs rampant, your neighbors will feel its wrath. It only makes sense that Salesforce requires developers on its platform to write unit tests. When refactoring code, unit tests will tell you when something breaks. You may find that writing unit tests for a piece of code is too difficult, which is a sign that your design is too complex and will be unwieldy when debugged at a later time or by someone else. There are pitfalls when using any unit test framework, but I want to take some time to highlight a few of the more sinister ones that will be invisible to you until you fall into them head-first.

Field requirements change.
The first common design oversight is forgetting that field requirements may change. It's inevitable during the growth and refinement of any Salesforce org, if not every software application, that requirements change: older objects get new fields, new required fields, or changed field definitions. This is fine, right? You aren't hard-coding object creation into each and every unit test, are you? What's that? To shreds, you say? Well, if you do hard-code sObject creation in your unit tests, then you're probably in the majority, so don't worry - it's definitely the most direct and simple way to write them. Let's see how that would look:

static testMethod void testCreateMyObject_shouldSucceed() {
    //Create test data
    MyObject__c object1 = new MyObject__c(Name = 'TestName', MyField__c = 50);
    
    //Invoke functionality
    Test.startTest();
    String errorMessage = '';
    try {
        Database.insert(object1);
    }
    catch (DmlException e) {
        errorMessage = e.getMessage();
    }
    Test.stopTest();
    
    //Check results
    System.assertEquals('', errorMessage);
}

This looks fine, right? It sure does, so let's copy and paste this unit test about thirty times to test the various edge cases. Cool. Now fast-forward a few months to when MyObject__c gets MyField2__c added and made a required field. Ka-boom! Time to rewrite thirty unit tests!

Moving code to a different org.
Some developers will have to package their code and move it to a different org. If you are one of these developers, I'm sure you've had a multitude of issues with this before. You've got your code working great in your sandbox org, and then applied some fixes after discovering that your code doesn't account for running in an empty org and can't handle a few measly null references. After that experience, you'll be a little nervous when it becomes time to install it into a production org, or worse, a production org with complex workflows and triggers.

Your fears come true when, installing your wonderful package of joy into the production org, you find that your code is trying to insert a standard Contact record with the bare minimum number of fields, and the Contacts being inserted by your unit tests are being rejected. How could you have known that the installing org decided to make Contact.FavoriteCRMSoftware__c a required field! Time to rewrite some unit tests!

Force.com Portable Tests Pattern
So what's a dev to do? Now that we have the foresight, what can we do to evade these errors and save ourselves some unit test re-writing? Thinking about it, we find the crux to be this question: How can we successfully insert data when we don't know the conditions for successful insertion at design time?

The suggested answer: for most cases, isn't it enough to query for an existing record in the database to use? That record is in the org, so it must already have the information the org requires. So grab one, modify it to the state that your test needs, and go with it. If you need to test insertion, this probably won't work (though maybe you could query for a record, set the Id of the returned record to null, then insert it as a new record. Hmm...).

I postulate that the best solution for these issues is to use the following Force.com Portable Tests Pattern (please suggest a better name). It uses a TestObjects class that acts as a kind of record factory that abstracts away an individual unit test's responsibility for creating/querying for a record to use in your test. Check out how we would change the example above:

static testMethod void testCreateMyObject_shouldSucceed() {
    //Create test data
    
    //Invoke functionality
    Test.startTest();
    String errorMessage = '';
    try {
        //This method queries for or creates and inserts the record for us.
        //We could also use the createMyObject method in this case.
        MyObject__c object1 = TestObjects.getMyObject();
    }
    catch (DmlException e) {
        errorMessage = e.getMessage();
    }
    Test.stopTest();
    
    //Check results
    System.assertEquals('', errorMessage);
}

And then use a TestObjects class that will handle the creation/querying for a record to use:

public with sharing class TestObjects {
    //Use the get* method if you want to do the query-first object creation.
    public static MyObject__c getMyObject() {
        MyObject__c myObject = new MyObject__c();
        try {
            //Try to query for the desired record first.
            myObject = [SELECT Name, MyField__c FROM MyObject__c LIMIT 1];
        }
        catch (QueryException e) {
            //If that fails, then create one.
            myObject = TestObjects.createMyObject('TestFeature', 50);
        }
        return myObject;
    }
    
    //If you want to skip the query-first part, just call the create* method.
    //If you discover a required field in the org, you only need to change this method, not every single unit test.
    public static MyObject__c createMyObject(String name, Integer myField) {
        MyObject__c myObject = new MyObject__c(
            Name = name,
            MyField__c = myField);
        Database.insert(myObject);
        return myObject;
    }
}

What do you think of this structure? Do you see anything I'm missing? Can it be made more robust? Should we change the name of the TestObjects class to something else? Leave a comment below.

Alternative to the Average AJAX ActionStatus

Monday, July 18, 2011

Visualforce offers some great shortcuts that make for faster development and a more consistent user experience. It is these tools to which Salesforce refers when it proudly advertises the short development cycles that programmers experience on the Force.com platform. It makes my life as a developer easier, and I really appreciate that. Sometimes, however, you want to add a slightly more advanced feature, one that requires pushing your code off of and beyond the tracks that the platform provides.

One area that Visualforce has made very simple is Ajax. Adding just one or two Visualforce tags and attributes can produce a responsive page that uses Ajax to update information. One such tag is actionStatus. An Ajax operation will work just fine without this tag, but the user won't know that an Ajax update is, in fact, taking place. Add this tag to the page and reference it from the element that triggers the Ajax call to tell the user, "Hey, wait a second. We've got some data coming from the server, so you should wait for it to arrive before continuing." Here's the simplest Ajax example, taken from the SF documentation for commandButton:

<apex:page controller="exampleCon">
  <apex:form id="theForm">
    <apex:outputText value="{!varOnController}"/>
    <apex:commandButton action="{!update}" rerender="theForm" value="Update" id="theButton"/>
  </apex:form>
</apex:page>

Code Review: In this example, the secret sauce that adds the Ajax is the action and rerender attributes on the commandButton tag. Clicking the button sends an Ajax message to the server to get the latest value of varOnController, a property of the controller object that we assume changes with time. When the response returns to the page, the element created by the outputText component is replaced with the new value.

Now, let's imagine that it takes a few seconds for the controller to get this updated value back to us. The user will sit there, staring at the page, wondering if the button-click didn't work. We should give him some feedback:

<apex:page controller="exampleCon">
  <apex:form id="theForm">
    <apex:outputText value="{!varOnController}"/>
    <apex:actionStatus startText=" (working...)" stopText=" (done)" id="updateStatus"/>
    <apex:commandButton action="{!update}" rerender="theForm" status="updateStatus" value="Update" id="theButton"/>
  </apex:form>
</apex:page>

Code Review: This code is the same as the first example, except we've added an actionStatus tag and attached it to the commandButton's Ajax call using the status attribute on commandButton. With this simple change, when the Ajax call begins, "(working...)" appears next to the outputText, and when it finishes, it changes to "(done)". This is a pretty good quick-and-done solution for user feedback.

The Problem: But what if we had a page that was heavy on form inputs, one that requires lots of user interaction? Is there a way to tell the user to wait before filling in more fields until the page updates? Well, I've found two possible solutions. Neither is perfect, but they both get the job done. What we want to achieve is to disable all inputs and buttons on the form while the page update is taking place.

1) The first way to accomplish this is to create a drop-down curtain effect, which drops a see-through curtain to capture clicks over the form. Here's how solution number 1 works:

<apex:pageBlockSection title="Form 1" id="formSection" collapsible="false">
  <div id="loadingDiv"></div> <!-- The see-through curtain; Javascript will drop it over the section. -->
  <apex:inputField value="{!myObject.Name}"/>
  <apex:inputField value="{!myObject.Address}"/>
  <apex:inputField value="{!myObject.OtherFields}"/>
  <apex:commandButton action="{!update}" rerender="formSection" onclick="showLoadingDiv();" oncomplete="hideLoadingDiv();" value="Update" id="theButton"/>
</apex:pageBlockSection>

Code Review: This is just a pageBlockSection contained inside form tags, which aren't shown. It has three fields (though it could have more) and a button to post back to the server. Instead of using the actionStatus tag, we're going to call our own Javascript functions to manipulate the loading curtain:

var $j = jQuery.noConflict();

//This escapes SF-created IDs for use in jQuery selectors.
function esc(myid) {
  return '#' + myid.replace(/(:|\.)/g, '\\$1');
}

function showLoadingDiv() {
  var divToScreenEsc = esc("{!$Component.formSection}");
  var newHeight = $j(divToScreenEsc + " .pbSubsection").css("height"); //Just shade the body, not the header
  $j("#loadingDiv").css("background-color", "black").css("opacity", 0.35).css("height", newHeight).css("width", "80%");
}
function hideLoadingDiv() {
  $j("#loadingDiv").css("background-color", "black").css("opacity", "1").css("height", "0px").css("width", "80%");
}

I'm using jQuery here. I've had bad experiences trying to use vanilla Javascript, and I've vowed never to use it again; jQuery gives consistent results across browser DOMs and across Javascript implementations. Note the esc function. It is necessary for escaping the non-standard characters in SF-created Ids for use in jQuery selectors, and was a solution created by Wes Nolte. See his blog post on the solution here. Beyond that, we're just selecting the loadingDiv and setting some styles - pretty simple.


2) The second solution avoids adding another element to the DOM, and just uses Javascript to disable all selectable fields and buttons in the form:

<apex:pageBlockSection title="Form 1" id="formSection" collapsible="false">
  <apex:inputField value="{!myObject.Name}"/>
  <apex:inputField value="{!myObject.Address}"/>
  <apex:inputField value="{!myObject.OtherFields}"/>
  <apex:commandButton action="{!update}" rerender="formSection" onclick="showLoadingDiv2();" oncomplete="hideLoadingDiv2();" value="Update" id="theButton"/>
</apex:pageBlockSection>

Code Review: This is the same VF snippet as above, minus the curtain div. This time, we'll use Javascript to set the styles of the form elements themselves.

function showLoadingDiv2() {
  var divToScreenEsc = esc("{!$Component.formSection}");
  $j(divToScreenEsc).css("opacity", "0.35");
  $j(divToScreenEsc + " input, " + divToScreenEsc + " select").attr("disabled", "true");
}
function hideLoadingDiv2() {
  var divToScreenEsc = esc("{!$Component.formSection}");
  $j(divToScreenEsc).css("opacity", "1");
  //Use removeAttr, not attr("disabled", "false") - any value for the disabled attribute keeps the element disabled.
  $j(divToScreenEsc + " input, " + divToScreenEsc + " select").removeAttr("disabled");
}

This will disable all input and select elements in the specified form section as well as turn the entire section slightly transparent by setting the opacity to 35%.

That's it, really - two simple solutions that I came up with to give a better user experience. Improve upon them! I'm not a pro at HTML and Javascript, and maybe you can help make them better - leave a comment!

Before concluding this post, I want to mention a more elegant version of this that you should investigate:
Keep in mind that the actionStatus tag has both onstart and onstop attributes, from which you can call the necessary Javascript functions. The advantage of doing it this way is that the functionality is tied to the actionStatus tag, adding a layer of abstraction between the functionality and its visual status indicator. You would call the Javascript functions from the actionStatus tag instead of from the Ajax-initiating tags, such as a commandButton, keeping your code DRY and happy.

Why Did Heroku Choose Clojure? And Why Would You?

Tuesday, July 12, 2011



Not too long ago, Salesforce, a behemoth in the CRM space, acquired Heroku, a young cloud platform for Ruby and Rails applications. Nobody was sure what Salesforce's plans were when they made the acquisition, and even now, several months later, nothing but the light of speculation shines on the matter. The official word from Heroku and Salesforce is that both companies share similar philosophies/personalities, and both share a mission to make developers' lives easier and more enjoyable. This is great! I agree that both companies do a great job at developer relations. But from a business point of view, with happy developers come unpredictable innovations, and persuading innovation to happen in your backyard is certainly a worthwhile investment!

Since the acquisition, Heroku has made some pretty consistent and sizable strides: releasing a new version of their stack, announcing support for the hot new node.js web server, and announcing support for Clojure. As a Ruby/Rails platform, Heroku had neither need nor want for the JVM. Now that it officially supports Clojure (a JVM-based language), this has changed. Because this requires them to split their attention between hosting JVM apps and Ruby apps, this latest upgrade may prove to be a significant investment by Heroku. It stands to reason, therefore, that it will now be somewhat easier for Heroku to support other JVM technologies, such as Scala or Groovy, which is exciting.

But why did Heroku adopt Clojure as its newest language, as opposed to some other language? Well, functional languages have always been on the fringe, but they have been consistently gaining in popularity. According to the TIOBE language index, functional languages have 4.4 points (of 100) of popularity. Yes, this is pretty small compared to object-oriented languages' 56 points, but it is also the greatest increase in interest of all the programming paradigms, boasting a one-year delta of +1.4 points compared to the object-oriented paradigm's +0.6 points.

How, then, are these minority members using functional languages, and why is the paradigm growing in popularity? According to this 2011 survey of Clojure users, 62% of its users use Clojure for web development, the outstanding majority, followed by math/data analysis and NoSQL programming at 42% and 27%, respectively. Note, however, that an even 50% of the respondents do not yet use Clojure at work, but rather use it for hobby projects. Also note the results of the "What have been the biggest wins for you in using Clojure?" section, which show that it is chosen not for its strong concurrency support, but because it is a strong functional language. Many Lisp users have adopted Clojure because it has been chosen as the modern reincarnation of Lisp, so keep this large user base in mind when viewing the results. Clojure users also cite their appreciation of its robust, immutable data structures and its ability to run on the JVM.

It also paves the way for fewer bugs. Because it is a functional language and hails from the Lisp family, Clojure is highly concise, allowing for high functionality in few lines of code - and fewer lines of code mean fewer lines on which a bug can appear. Also, because Clojure data structures are immutable, there are fewer cases in which a programmer may forget to handle unexpected input in function/method definitions, meaning fewer bugs. From my research, the general consensus on functional languages is that they allow for relatively carefree programming, and that functional apps have fewer bugs and run faster than their object-oriented counterparts.

Clojure is a relatively young language, appearing in 2007, so why should you consider shifting to a functional language like Clojure? If you are like its other users, you would choose it because it is a fast, efficient, functional language, and because it extends the bloodline of Lisp, which has a long academic and open-source history. The survey above shows that, while it may not have a solid share of the commercial sector yet, it has real potential to gain a larger one. It would be a great choice, for example, for writing a high-performance, scalable web app - and web-based solutions are still surging in popularity.

I have a high interest in functional languages and how they can be used to create more efficient and scalable code. I look forward to seeing the community grow around functional languages such as Clojure and Scala. I'll admit that Clojure's syntax, with its many parentheses, is intimidating at first to many developers, which is definitely a hurdle to its growth. It takes education and hands-on experience to overcome this hurdle, but once overcome, Clojure can produce huge gains in developer productivity and application performance.

As they stated in their acquisition vows, both Heroku and Salesforce aim to make developers' lives better and to foster innovation. Looking at the reasons given above, it's pretty clear that Heroku adopted Clojure for its platform because it is a strong contender among functional languages.

If you're interested in trying out a functional language, I suggest you try Clojure on Heroku - it would be a pretty great learning experience. And keep in mind that if you happen to make something awesome, because it was written in a functional language and lives in the Heroku cloud, you should have no trouble scaling your web app to meet demand when it becomes supremely popular.

Using a Loading Overlay with Visualforce's ActionStatus tag for Form Refreshes

Wednesday, July 6, 2011

I'm not sure how other Visualforce developers use the actionStatus tag, but I've discovered a different way of using it that gives visual feedback to the user that the part of the page he is working with is being updated.

Until now, I've used the actionStatus tag like this, relying on the start and stop facets of the element to display simple "Loading..." text:

<apex:actionStatus id="entryStatus" >
   <apex:facet name="start">
      <apex:outputText style="font-weight:bold;font-size:16px;" value="Loading..."/>
   </apex:facet>
</apex:actionStatus>

The Loading text appears, but the fields are still editable, which shouldn't be true.


This is fine for a lot of cases, but when the form has multiple input fields, I don't want the user to think they can change field values while they wait for an AJAX update, because their changes will be clobbered. Rather, it would be intuitive to have a slightly transparent screen, like a lightbox, appear over the form fields to prevent changes and provide a visual cue that an update is occurring on the section the user is working with. After trying a few different methods, I refactored the solution to use the actionStatus tag, which has onstart and onstop attributes. These attributes can call a Javascript function that drops a screen down at a higher z-index than the fields, capturing subsequent clicks.

<apex:actionStatus id="entryStatus" onstart="showLoadingDiv();" onstop="hideLoadingDiv();">
   <apex:facet name="start">
      <apex:outputText style="font-weight:bold;font-size:16px;" value="Loading..."/>
   </apex:facet>
</apex:actionStatus>

<div id="loadingDiv"></div> <!-- The screen element that will cover the form. Javascript will change its height to drop it down. -->
   <apex:pageblock id="hardCosts" title="Line Items - Hard Cost Estimate"> <!-- The form element to screen. -->
      <script>
      //This Javascript needs to be inside the element it screens so that {!$Component.hardCosts} resolves to the correct Id value.
      function showLoadingDiv() {
         //Find the size of the div to screen.
         var blockToLoad = document.getElementById('{!$Component.hardCosts}');
         var loadWidth = window.getComputedStyle(blockToLoad, "").getPropertyValue("width");
         var loadHeight = window.getComputedStyle(blockToLoad, "").getPropertyValue("height");
         //Set the loadingDiv to screen the element at the correct size.
         var loadingDiv = document.getElementById('loadingDiv');
         loadingDiv.setAttribute('style', 'background-color:black; opacity:0.35; height:' + parseFloat(loadHeight) + 'px; width:' + loadWidth + ';');
      }
      function hideLoadingDiv() {
         //Find the loadingDiv to hide.
         var loadingDiv = document.getElementById('loadingDiv');
         var blockToLoad = document.getElementById('{!$Component.hardCosts}');
         var loadWidth = window.getComputedStyle(blockToLoad, "").getPropertyValue("width");
         //Set its height to 0 to hide it.
         loadingDiv.setAttribute('style', 'height:0px; width:' + loadWidth + '; background-color:black; opacity:0.35;');
      }
      </script>

And some styles to set the initial size of the loadingDiv element and add some attractive transitions.

#loadingDiv {
   height:0px;
   width:100%;
   position:absolute;
   -webkit-transition: all 0.10s ease-out;
   -moz-transition: all 0.10s ease-out;
}

It looks better with animations, but here's the result of a rough draft of the idea:
An overlay shows that it is loading.

This is just a rough draft, so some features should be added before deploying it to production. For example, adding "Loading..." text next to an animated gif on the overlay would make it even clearer that the user must wait before accessing the data again. But this is a good start.

Can you make it better? Message me on Twitter (@alex_berg) so we can improve the idea.

Awesome Force.com Posts Are Moving

Tuesday, July 5, 2011

I've been posting a number of in-depth articles here on Alex Blog for several weeks now, and it's been fun sharing what I've learned. A few people have noticed - namely my employer - and they've asked if I would start writing posts about the Force.com platform for the company blog. Sundog, factually speaking, is the greatest employer ever, so I happily agreed. Therefore, my Force.com technical articles will start appearing on the Sundog blog.

Now, I'll probably keep posting commentary about Force.com here, along with links to the Sundog blog, and I'll experiment with posting about other topics that interest me, such as Rails, web application architectures, and functional languages like Haskell. So never fear! Educational content should keep appearing! To keep reading my Force.com articles, you can subscribe to my Sundog RSS feed: http://feeds.feedburner.com/Sunblog_aberg

Check out my first post on the Sundog blog about the Comet model, about which Pat Patterson is hosting a webinar in two days.

The Comet Model: The Yang to the Yin that is AJAX Client-side Polling

Saturday, July 2, 2011

Force.com giveth and it (hopefully doesn't) taketh away. Salesforce provides a new tool or two with each release of its platform, enabling Force.com developers to quickly and easily create apps that are up-to-par with the rest of the web development world. Spring '11 brought us Visualforce dynamic binding, Summer '11 brought Javascript remoting. Now, a new Streaming API has been announced on the Force.com blog. Pat Patterson, a Salesforce developer evangelist, is hosting a webinar on July 7th to give a preview of this Streaming API.

There were a few unfamiliar terms on that blog post, such as the CometD project and the Bayeux protocol, and curiosity gave me a brain itch until I finally found time to do some research. I'll share some of my findings here, so that the reader may be better prepared for the webinar, should they choose to attend.

Everything is a solution to a problem, or so I like to think, so what problem does the CometD project solve? This stuff is all about providing a better user experience: making what's on the page more accurately reflect what's on the server. It aims to fill a niche left unfilled by AJAX and client-side server-polling - to be the yang to that yin. So, to better understand the need for the CometD project and the Bayeux protocol, we need to see precisely what is currently available to a software engineer who already has AJAX in his tool-belt.

AJAX! The ambiguous term that tech-wannabes used to describe a web page that felt responsive. These days, people have narrowed its meaning closer to its actual functionality, and HTML5 is the New Thing about which to speak ambiguously. Commentary aside, the part of AJAX that we are interested in is its ability to talk with the server. It has the almighty XMLHttpRequest object that can grab data from the server, which is normally triggered either by a user interaction,
<input type="button" value="Send AJAX!" onclick="jsFuncThatSendsHttpRequest();"/>
<div id="ajaxResponse"></div> <!-- The JS method will format and place the response here -->
or by polling on a timer - see this great example using jQuery.

Now, Javascript like this works pretty well, but it isn't a very elegant solution. We're manually checking with the server every 500 milliseconds (or however long we set), and I'm sure the server gets annoyed at telling us, over and over, that it's got nothing new - not to mention all the wasted bandwidth and requests! A more elegant solution would be the observer pattern: we tell the server, just once, that we'd like updates; then, when it actually gets an update, it sends it to every subscriber, including our JS process.

And this is exactly what the CometD project is all about: it's a push technology of the long-polling variety. If a web server is compliant with the open Bayeux protocol, it uses the observer pattern to push updates to subscribers. In the case of the Force.com platform, the platform will be able to handle a subscribe request from Javascript code on a Visualforce page. In summary, you can write a Javascript callback function on your Visualforce page that will be invoked with new data every time a process on the server gets it. Nice!
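
For a flavor of what that might look like, here's a hedged sketch using the CometD jQuery binding (the endpoint URL and channel name are hypothetical):

//Handshake once, subscribe once, then let the server push updates to us.
$.cometd.init({ url: 'https://na4.salesforce.com/cometd' }); //hypothetical endpoint
$.cometd.subscribe('/topic/accountUpdates', function(message) {
    //Invoked whenever the server publishes to this channel - no polling loop.
    console.log('New data: ' + message.data);
});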

You can see how this Comet model is the yang to the yin that is client-side AJAX polling. Now, what we do with it is the question. GTalk, GDocs, and Meebo are some specific examples that use Comet ideas, but other applications could stream financial data, sports scores, or online gaming packets, all of which place the logic in the server instead of the client. It will be interesting to see how the introduction of HTML5's WebSockets changes the need for the Comet model. Some leading voices have said that all existing HTTP push solutions, such as the Comet model, are just hacks and that WebSockets will standardize them. Others argue that each technology has its own solution domain and that they can co-exist. No matter: WebSockets is still not close to being supported in browsers, as its specification and security model are still under debate, so if you are choosing a push technology today and want it to stay relevant for a few more years, the Comet model is your best choice.

Starting Apex Scheduled Jobs Without Ugly Cron Expressions

Friday, June 24, 2011

tl;dr - I show you how to use a button to start a scheduled job in Salesforce. Skip the first three paragraphs to get to the meat of this post, and lots of code.


Often in Salesforce, an administrator or developer wants to update all the records in their database for some reason. Maybe they are changing the way data is stored in a field, or maybe they are migrating it from one field to another. What matters is that doing this can take a long time, and Salesforce's multi-tenant architecture does not like long-running processes that slow down important operations, like database triggers.

To solve this problem, Salesforce appeases developers by providing a way for them to run operations asynchronously at a lower precedence. This is a powerful feature to have, but you have to deal with ugly (but powerful) Cron expressions to start the process. For example, '0 30 * * * ?' fires at the 30th minute of every hour (the fields are: seconds, minutes, hours, day of month, month, day of week). Note, however, that Salesforce uses a slightly more restrictive Cron expression, so refer to the Salesforce System.schedule documentation instead of the standard Cron documentation.

It's got some serious limitations, though. The only way to start a scheduled job is by writing Apex code, probably from an anonymous block, a trigger, or a Visualforce controller. Similarly restrictive, the easiest way to stop a job once it is scheduled is to go to Setup > Monitoring > Scheduled Jobs in the SF UI. This is not a good long-term solution for starting and stopping jobs.

So let's find a better solution. Let's eliminate the code and turn it into a 'Start' button and allow the user to supply a single number to the expression. Something like this:



Now, clicking on the 'Start' button should take the specified integer, complete the Cron expression, and pass it to the scheduler. Keep in mind that my code is assuming that this page is a snippet from one of my VF pages that uses a standard controller for a Config_Object__c sObject. Here's some example code.

Visualforce - 


<apex:pageBlockSectionItem >
    <apex:outputLabel value="Start Notifications:" for="startNotifications"/>
    <apex:commandButton action="{!startScheduledJob}" value="Start" id="startNotifications"/>
</apex:pageBlockSectionItem>
<apex:pageBlockSectionItem >
    <apex:outputLabel value="Notify on Nth Minute of Each Hour:" for="notificationMinute"/>
    <apex:inputText value="{!notificationMinute}" id="notificationMinute" size="10" maxlength="2"/>
</apex:pageBlockSectionItem>


Controller -

public void setNotificationMinute(String pNotificationMinute) {
    Integer notificationMinute = 0;
    
    try { //cleanse the input
        notificationMinute = Integer.valueOf(pNotificationMinute);
    }
    catch (Exception e) {
        notificationMinute = 0;
    }
    if (notificationMinute == null) {
        notificationMinute = 0;
    }
    if (notificationMinute < 0 || notificationMinute > 59) {
        notificationMinute = 0;
    }
    //Create the Cron expression (fields: seconds, minutes, hours, day of month, month, day of week).
    String notificationSchedule = '0 ' + notificationMinute + ' * * * ?';
    configObject.Notification_Schedule__c = notificationSchedule;
}
public void startScheduledJob() {
    //notificationMinute is a field on the configObject sObject.
    if (configObject == null) {
        init();
    }
    //Let's construct this: System.schedule('Notification Job', '0 49 * * * ?', new ScheduledEmailJob());
    String notificationSchedule = configObject.Notification_Schedule__c;
    if (notificationSchedule == null) {
        notificationSchedule = '0 0 * * * ?'; //some default value
    }
    
    if (configObject.Scheduled_Email_Job_Id__c == null) { //start the job if we haven't started it yet
        String jobId = System.schedule('Notification Job', notificationSchedule, new ScheduledEmailJob());
        configObject.Scheduled_Email_Job_Id__c = (Id) jobId; //Persist the jobId to the database so we can use it to stop the job later.
    }
    else { //delete the existing job and start a job with this new value
        stopScheduledJob(configObject);
        String jobId = System.schedule('Notification Job', notificationSchedule, new ScheduledEmailJob());
        configObject.Scheduled_Email_Job_Id__c = (Id) jobId;
    }
}


Also, it would be nice to have some information about the currently running job, if there is one. This information is stored in the CronTrigger record that was created. The only handle we have on this record is the jobId, which was the return value of the System.schedule method and is the Id of the CronTrigger record that was created.

It'll look something like this on the VF page -


And here's the code behind it -

public String getStatusNotification() {
    String status = '';
    //The only handle we have on the currently running cron job is the jobId that was given to us when we scheduled the job.
    String jobId = configObject.Scheduled_Email_Job_Id__c;
    if (jobId == null) {
        status = 'Notification job not scheduled.';
    }
    else {
        try {
            //Now that we have the jobId, we'll query for the informational CronTrigger record.
            CronTrigger existingCronTrigger = [SELECT State, NextFireTime FROM CronTrigger WHERE Id = :jobId LIMIT 1];
            status = 'State: ' + existingCronTrigger.State + ' - Next Notification Time: ' + existingCronTrigger.NextFireTime;
        }
        catch (QueryException e) { //A no-rows query result throws QueryException, not DmlException.
            status = 'Invalid notification job Id.';
        }
    }
    //Return the status of the currently running job, if there is one, to the page.
    return status;
}


The task I leave you with, simple though it may be, is to add a 'Stop' button that will stop whatever job is currently up in the air. (Hint: try the System.abortJob method)

The LastModifiedDate Field - Behavior Differences in FeedItem vs. Other sObjects

Tuesday, June 14, 2011

TL;DR - FeedItem.LastModifiedDate does not behave the same way as a normal sObject's LastModifiedDate in, say, a master-detail relationship.


Reading through the Spring '11 release notes (PDF) in preparation for my SF 501 Cert. exam next week, I discovered an oddity. As you can see under the "Changed Chatter Objects" section on page 76, the LastModifiedDate field has been added to Chatter objects. This is great, but the definition of its behavior is what caught my interest. Here's what they say:
When a feed item is created, LastModifiedDate is the same as CreatedDate. If a FeedComment is inserted on that feed item, then LastModifiedDate becomes the CreatedDate for that FeedComment. Deleting the FeedComment does not change the LastModifiedDate.
Woah - so the LastModifiedDate for a FeedItem changes even when you don't directly interact with that FeedItem? Not the behavior I would expect. Now, I can see this being useful for Chatter feeds, because the time of update is the important bit of information there, but isn't this an exception to the rule? That is, if I create two sObjects, one called 'Master' and one called 'Detail', with a master-detail relationship between the two, will the addition of a new Detail record update the LastModifiedDate of the Master record?

I would guess that it wouldn't, but I couldn't find any documentation on it, so I went to my personal dev org to test it out.

  1. I defined a 'Master' object and a 'Detail' object with a master-detail relationship between them.
  2. I created a new 'Master' record.
    1. (Master's LastModifiedDate == 6/13/2011 2:45 PM)
  3. I created a new 'Detail' record on that 'Master' record.
    1. (Detail's LastModifiedDate == 6/13/2011 2:47 PM)
    2. (Master's LastModifiedDate == 6/13/2011 2:45 PM)
According to my results, adding a detail record to a master record will not change the master record's LastModifiedDate. Chatter's FeedItem.LastModifiedDate is the exception, so keep this in mind.

Easy To Write In Apex, But Not In Javascript? Use Javascript Remoting!

Thursday, June 9, 2011

New in the Summer '11 release of Salesforce.com is the GA of the shiny new Javascript Remoting. To define it quickly: Salesforce now provides the ability to write Javascript code on a Visualforce page that calls a static method on that page's controller.

Now, there has always been a way for a Visualforce page to talk to its controller - VF's actionFunction component, among other built-in AJAX functionality - but Javascript remoting is a bit different. As its doc page explains: Javascript remoting allows you to pass parameters to an Apex method and to specify a Javascript callback function, whereas the actionFunction component allows you to specify rerender targets on the page and submits the page's entire form back to the controller. With the former, a light-weight JSON object is passed back and forth; with actionFunction, the page's entire form is serialized, sent across the wire, and deserialized again, which takes a significant amount of time. One is not inherently better than the other; they each have their own use cases. This post is about the first use case I found for JS remoting.

Problem

Challenge: Dynamically update the End Date when changing either Work Days or Start Date.
After estimating the time and dollar cost of a project, our process requires us to log our estimates in our Salesforce org. I was the lucky one chosen to implement the SF side of the solution. I like to provide a friendly and intuitive user experience, so I wanted to make it very Ajax-y. More specifically, I wanted a way to instantly update dependent values when the user enters the controlling value. This is easy with jQuery when it's just numbers and sums, but when it comes to computing dates, Javascript's client-side capabilities falter (unless a JS pro can inform me otherwise).

Solution

I had already solved this problem in Apex - for calculating this value when the record is submitted - with a method called getEndDateAfterWorkdays, which takes a Date startDate and an Integer workDays as arguments. I'm sure it's possible to do in Javascript, but I'm not a pro with the language and I didn't really want to duplicate the logic there (see DRY), so I looked for other solutions. Luckily for me, the Summer '11 release was just around the corner, allowing me to use JS remoting to call the method I had already defined in Apex!

The Code

Here's how it works.

1) Attach Javascript event listeners to the Work Days and Start Date input fields on the VF page that will call the Javascript remoting function.

$j(".duration").live("keyup", function(){ updateEndDate(); });
$j(".startDate").live("change", function(){ updateEndDate(); });


//using javascript remoting, send the startDate and the duration to the server, get the endDate back
function updateEndDate() {
    $j(".startDate").each(function() {
        //Get the necessary info from each row in the list.
        var row = $j(this).parent().parent().parent();
        var startDate = row.find(".startDate").val();
        var duration = row.find(".duration").val();
        var endDateId = row.find(".endDate").attr("id");
        //Remotely call the controller method.
        CtlrEstimateEdit.getEndDateAfterWorkdays(startDate, duration, endDateId, function(result, event) {
            //This callback function doesn't remember the row from which it was called.
            //My solution: pass the Id of the destination element into the function and pass it back into this callback.
            var resultArray = result.split(",");
            var resultEndDate = resultArray[0];
            var resultSpanToInsertId = resultArray[1];
            if (event.status) { //if successful
                var spanToInsertIdEsc = esc(resultSpanToInsertId);
                $j(spanToInsertIdEsc).html(resultEndDate);
            }
        });
    });
}
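
A quick note on the esc() helper in that callback: Salesforce-generated element ids contain colons, which jQuery reads as pseudo-selector syntax, so the id needs escaping before it can be used as a selector. I didn't paste my version of the helper, but a guess at a minimal version looks like this:

//Hypothetical gap-fill: build a jQuery selector from a Salesforce-generated id
//by escaping the colons (and periods) that jQuery would otherwise treat specially.
function esc(myId) {
    return '#' + myId.replace(/(:|\.)/g, '\\$1');
}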


2) Add the remoting method to the controller for the Javascript to hit.

@RemoteAction
global static String getEndDateAfterWorkdays(String pStartDate, Integer pWorkDays, String pEndDateIdToPass) {
    Date startDate = Date.parse(pStartDate);
    //Minus 1 because work starts on (and includes) the start date itself.
    Integer durationInCalendarDays = HlprLaborEstimateMasterTrigger.workDaysDurationToCalendarDaysDuration(pWorkDays - 1, startDate);
    Date endDate = startDate.addDays(durationInCalendarDays);
    //Now convert back to a string to send back to the javascript.
    DateTime endDateTime = DateTime.newInstance(endDate, Time.newInstance(0, 0, 0, 0));
    String endDateToReturn = endDateTime.format('MM/dd/yyyy');
    //Append the id of the span to which to write this new value. I couldn't figure out another way but to push this value through here.
    return endDateToReturn + ',' + pEndDateIdToPass;
}


That's it, really, unless I forgot to paste some other code here. (Let me know if I did, please.)

Besides enforcing the DRY coding principle, what other use cases have you found for using Javascript remoting?

Adding Visualforce pages to the Home Tab by Using HtmlArea Components

Tuesday, June 7, 2011

The Salesforce home tab is a pretty great place to start your day...IF you take the time to set it up and personalize it to your needs. This is the stage I'm at - trying to get useful information onto my homepage. I've decided to make a Visualforce page that will display a list of items for me to look at for the current day. I'll take this custom page and display it in a home page component of type HtmlArea. Take a look at this Force.com blog post to see the direction of my intentions, except I'm using an iframe to display my VF page. I was able to knock out a simple page pretty quickly. The tricky part, however, was coercing the home tab page to display it in its full glory.

<iframe src="/apex/ItemsForToday?isComp=1&showAll=0" frameborder="0" width="100%"  height="300px"></iframe>

Here you can see the HTML code that I used to embed my Visualforce page into the home tab component of type HtmlArea. After saving this, I went back to my home tab to see my new product. Sadly, it only gave my component maybe 60px of height! (scrollbar-ing the rest) What happened?

After investigation, it seems that there is a Salesforce CSS style that is overriding the height attribute on this iframe. The quick and easy solution that I came up with is this:

<iframe src="/apex/TaskUserPrioritiesManager?isComp=1&amp;showAll=0" frameborder="0" width="100%" height="900" style="height:315px;"></iframe>

Just add a CSS style directly on the iframe tag. An inline style wins out over the stylesheet rule that Salesforce is applying, so ours takes precedence. Now, when returning to the home tab, I see that my VF page is looking proud on my homepage.

Addressing Visualforce View State and Controller Heap Space Problems

Tuesday, May 31, 2011

Force.com makes it easy to write an MVC page with AJAX functionality and easy access to the data in your organization's Salesforce database. So easy, in fact, that the developer may feel TOO free from the shackles of normal implementation details and choose to go crazy with the data. Continuing on this vector, a practicing Salesforce developer will, without doubt, hit a governor limit on either the view state in the page or the heap size in the controller. Be prepared for that day by learning about these two related things!

Let's start with the heap size error that you may hit: "Apex Heap Size is too Large". The 'heap' that is referenced is very similar to the less-abstract programming heap that you may have met in C++, Java, or other languages. When you declare a new variable in those languages, memory is allocated for it in one of two places: the heap or the stack. Similarly, your Apex controller is allocated memory space from the Force.com platform, and, because of hardware limitations and the multi-tenant architecture, you are only given a certain amount to use. When you request more than that amount of memory, SF gets angry and stops executing your code.

How do we solve this? Well, the biggest thing in your controller that is using heap space is your variables, so re-think your usage of them. Identify any variables or data structures that could be holding lots of data. Check out the Limits methods, and sprinkle a few System.debug('current heap usage: ' + Limits.getHeapSize()); statements around your code. The first place to look is at loops and any variables that are populated directly from a SOQL query. A simple, and inefficient, example:

List<Lead> leadToEmailList = new List<Lead>();
Map<Id, Lead> notJohnsonLeadIndexMap = new Map<Id, Lead>([SELECT Account.Name, LastName FROM Lead WHERE LastName != 'Johnson']);
for (Lead notJohnsonLead : notJohnsonLeadIndexMap.values()) {
   if (notJohnsonLead.Account.Name == 'Acme') {
      leadToEmailList.add(notJohnsonLead);
   }
}

That SOQL query could bring in a huge flood of data, most of which we aren't even keeping. We could construct the query to better filter the returned results, or we can move the SOQL into the for definition, which lets the platform fetch the records in batches via its QueryMore functionality and minimizes the amount of data stored on our controller:


List<Lead> leadToEmailList = new List<Lead>();
for (Lead notJohnsonLead : [SELECT Account.Name, LastName FROM Lead WHERE LastName != 'Johnson']) {
   if (notJohnsonLead.Account.Name == 'Acme') {
      leadToEmailList.add(notJohnsonLead);
   }
}



Now, on to the view state in a VF page. It is a solution to a problem - the problem of how to synchronize the user's last page state between the client and your code in the controller. HTTP is a stateless protocol by definition, which is part of what makes it so fast and efficient, but keeping state between the client and the server is a necessity in this modern era, so a work-around like the view state is needed. What does the view state look like?

You can see it in any Visualforce page that has a form tag by right-clicking > View Source; it's the large hidden input field inside the form.


As of the Winter '11 release, SF provides a view state debugging tool that you can enable in your VF development mode footer. It's great for seeing what's taking up all that space in your view state, even breaking it down into percentages of total.

So, now you can see what's in your view state and what you should try to fix, but how do you do that? Well, any variables/data structures that you define in your controller are serialized into the view state and sent to the client along with the HTML code. Knowing that, you can again take extra care to ensure that all variables in your controller are being efficiently used and contain only the minimum required data. The transient keyword was created for pretty much this purpose, so try using it. By marking a variable as transient, it will be discarded when the controller has finished its job and the page is rendered. The downside is that the variable will have to be recreated each time the page loads, but that's the trade-off. For further reading, TehNrd's blog post on this topic is very informative; check it out if you need a more detailed explanation. Also, wiki.developerforce.com has a nice introduction to Visualforce view state.
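
To make the transient idea concrete, here's a minimal sketch of the transient-plus-lazy-load pattern; the class, variable, and query here are all made up for illustration:

public with sharing class CtlrViewStateDemo {
    //Serialized into the view state and sent back on every postback.
    public String selectedLeadId { get; set; }

    //Excluded from the view state; rebuilt from the database on each request.
    transient List<Lead> reportRows;

    public List<Lead> getReportRows() {
        if (reportRows == null) {
            reportRows = [SELECT Name, Company FROM Lead LIMIT 1000];
        }
        return reportRows;
    }
}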

Why use the Master Trigger pattern?

Tuesday, April 26, 2011

Continuing my previous thoughts on the proper usage of before and after triggers, I'll share with you a problem that may arise if you choose to architect your trigger in an unorganized way.

For those of you that want the TL;DR version, skip the next 3 or 4 paragraphs of background info.

We have tree-like relationships between some of our data for an internal project management Force.com app, roughly: Project → Task → Time Entry, with each parent rolling up data from its children (I haven't used UML in a while, so don't read it thinking it's UML).
(Now, I realize that the pro SFers who are reading this may see that I am missing some killer functionality that Salesforce offers, but please don't share your suggestions to improve this data model, because I am leaving out the details for the sake of simplifying the explanation.)

When a time entry is inserted or updated, it tells its parent task to update itself, because the task has fields that summarize how much time has been entered on it. When a task updates, it sums its child time entries and updates some of its field values. Because the project also keeps track of the time entered on its tasks, the task then forces its parent project to update itself.

When a time entry, task, or project was moved, I was seeing a "Too many SOQL queries: 101" error because of all the recalculation that had to take place. This was a common error that I knew how to solve, but solving it, I discovered, required re-structuring and organizing the triggers on time entry and task.

Like most other developers, I slowly added features to this app as they were requested, placing each new feature in its own trigger. After seeing that I had 7 triggers on task, 4 triggers on time entry, and 4 triggers on project, I decided that it was time to get these under control by using a Master Trigger pattern, arriving at a variation of the pattern described by this blogger.
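
If you haven't seen the pattern before, its shape is simple: exactly one trigger per object, covering all the relevant events, that does nothing but delegate to a helper class. A rough sketch, with made-up object and helper names:

trigger TaskMaster on Task__c (before insert, before update, after insert, after update) {
    if (Trigger.isBefore) {
        //Each record fixes itself; no DML needed here.
        HlprTaskTrigger.populateSummaryFields(Trigger.new);
    } else {
        //Tell the parent projects to recalculate their time roll-ups.
        //(Trigger.oldMap is null on insert; the helper handles that.)
        HlprTaskTrigger.rollUpTimeToProjects(Trigger.new, Trigger.oldMap);
    }
}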

After moving the trigger logic into methods of a static helper class called by a single master trigger, I discovered that I was updating the parent task twice! Once in a before trigger and once in an after trigger! It took some more refactoring, but I was able to remove the update call from the before trigger and fix the problem.

Learn from my mistakes! Separate the duties of the before and after triggers! The before trigger is for making the record fix itself; the after trigger is for fixing other records.

Having learned my lesson, I have since always used the master trigger pattern to organize my code, and always consult Mr. Before and Uncle After when deciding where to put logic in a trigger.

Salesforce Triggers, Anthropomorphized. Mr. Before & Uncle After

Friday, April 22, 2011

(I give you permission to skip the first 3 paragraphs if you are a TL;DR kinda person)

As a Force.com developer, I find Salesforce triggers to be very powerful because of their simplicity and ease-of-use. It's a simple concept, very much like 'events' in other languages: triggers are called when a record is CRUDed to or from the database. Do you desire functionality so complex that a formula field can't handle it? Add a regular field to the object and populate it in a before trigger. Do you want to update a child record when a field or fields on its parent record are modified? Add code to the after update trigger on the parent object to check for field modifications and then update the appropriate child records.
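
To illustrate both of those, here's a tiny example with each flavor in its natural habitat; the custom field and the exact parent/child logic are invented for the sake of the demo:

trigger AccountExample on Account (before insert, after update) {
    if (Trigger.isBefore && Trigger.isInsert) {
        //Too complex for a formula field? Populate it here; no DML required.
        for (Account acct : Trigger.new) {
            acct.Display_Label__c = acct.Name + ' (' + acct.Industry + ')';
        }
    }
    if (Trigger.isAfter && Trigger.isUpdate) {
        //A field on the parent changed, so update the children to match.
        List<Contact> childrenToUpdate = new List<Contact>();
        for (Contact child : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :Trigger.newMap.keySet()]) {
            Account parent = Trigger.newMap.get(child.AccountId);
            if (parent.BillingCity != Trigger.oldMap.get(parent.Id).BillingCity) {
                childrenToUpdate.add(new Contact(Id = child.Id, MailingCity = parent.BillingCity));
            }
        }
        update childrenToUpdate;
    }
}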

Now, I'm not here today to teach you the 101, 102, and 103 classes on Force.com triggers; I'm here to warn you of the unorganized mess that your triggers and their code will become as they grow in size. Like slowly warming the water in a crustacean's tub to keep it from noticing that it is, in fact, being cooked, triggers and classes will slowly start growing in your org. After you return from your week-long vacation in Maui, you will sit down, refreshed, grab the next thing in your to-do list, and start searching for a good place for this addition, or worse, bug-fix. Panic will strike you and you will be overcome by regret, regret for not reading this blog post earlier and realizing that you should get your steaming pile of code organized.

So, how do you get your steaming pile of code organized, you ask? Well, you can start by using triggers for their intended purpose. Not all triggers are created the same, you see. There are plainly visible limitations on some triggers, such as the Id, CreatedById, and LastModifiedDate standard fields having no value in a before insert trigger, or trigger.new being null in delete triggers (trigger.old is what you want there). Then there are the invisible characteristics of triggers. What characteristics are those, you ask? Allow me to anthropomorphize them for you.

Triggers are divided into two categories: before and after. I'm sure you know what the difference is, but do you know how their differences feel?
The code in a 'before' trigger runs before the record is saved to the database, so it should only be used for validation or for populating the record's own too-complex-to-be-a-formula fields. This is where any decisions about the incoming data should be made, like forcing it to update its Club Card membership before entering the party, or bouncing the data because it wasn't on your list. This all must occur *before* the data has been committed to the database - before it's entered the party. It's a perfect match. (Yes, Mr. Before is a bouncer at a club.) The one thing that you should restrain yourself from doing here is using DML statements. I suggest that those are better left to his sibling, the 'after' trigger.

The code in an 'after' trigger will run *after* the data has been committed to the database. The record received the all-clear and you gave it a flower for its lapel, and you allowed it to enter the party. This is where you tell other people that the record has entered the building. Do some SOQLs here to find the BFF object, and tell the BFF object to suit up, because it's game-time. Query for the DJ object and have it pull out the records that our new member enjoys. Give an update to each compatible female that our new member will encounter, telling them how awesome he is. Yes, Uncle After is the club owner, and he directs the rest of the objects to ensure that all club members are in-the-know about the current state of the club and its new member.

Anthropomorphized, signed, and sealed. Keep YOUR club organized and informed and YOU will be happy.

The "ConvertLead" Lead Conversion Process of Salesforce

Tuesday, March 8, 2011

I've been debugging some custom code that is operating around the Lead Conversion process of Salesforce. This is a big feature of the standard Salesforce platform - the ability to take the data from a fresh Lead and effortlessly transfer it into a new Contact, Account, and/or Opportunity.

We have some custom fields on the standard objects, and one of these fields on the Lead standard object was not properly keeping its value through the lead conversion process into the Contact. There is very little documentation on this process (see the convertLead() method and the ConvertLead Operation), so I was forced to sprinkle debug statements around my code and check the debug logs. I'll summarize the details of the ConvertLead process and its order of operations that may be of use to another Apex developer.

When the ConvertLead process is called, either by pressing the "Convert" standard button on the Lead page or by calling the convertLead Apex method in code, the same steps are followed on Salesforce's side. Here's my echo-location view of how it plays out:

Clicking the "Convert" button does this in this order:

  1. Insert new Account or update existing Account
  2. Insert the new Contact
  3. Insert new Opportunity (optional)
  4. Update references
    1. All records with fields that pointed to the old Lead are updated to point to the new Contact
    2. If an Opportunity was created, all references are updated to point to the new Opportunity instead of the new Contact
  5. Update old Lead
    1. The isConverted field is set to 'true' and the other conversion-related fields (like ConvertedDate and ConvertedContactId) are populated

Now, remember that triggers and validation operations occur before and after each of these DML operations. This order of operations was our biggest question mark when confronting our bug, but we now understand it better, if only by a little. Looking at these notes, since all references end up pointing to the optional Opportunity when it is created, it seems that Salesforce deems the Opportunity the more important of the two newly created sObjects (Opportunity and Contact). Keep this little detail in mind as you debug your code.

(Just a guess, but I would guess that these all occur under the same database transaction, and if one part fails, the transaction rolls back and an error is displayed to the user.)
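
For reference, kicking off the same process from Apex looks roughly like this. The converted status string varies by org (query LeadStatus WHERE IsConverted = true for yours), and the Lead chosen here is arbitrary:

Lead freshLead = [SELECT Id FROM Lead WHERE IsConverted = false LIMIT 1];

Database.LeadConvert lc = new Database.LeadConvert();
lc.setLeadId(freshLead.Id);
lc.setConvertedStatus('Closed - Converted'); //must match a converted LeadStatus in your org
lc.setDoNotCreateOpportunity(false); //step 3 above is optional

Database.LeadConvertResult lcr = Database.convertLead(lc);
System.debug('Success? ' + lcr.isSuccess() + ' | New Contact Id: ' + lcr.getContactId());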