tag:blogger.com,1999:blog-84106101302288392752024-03-21T20:25:16.881-05:00Alex BlogAlexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.comBlogger40125tag:blogger.com,1999:blog-8410610130228839275.post-12680337403026830862013-08-12T02:26:00.002-05:002013-08-12T02:26:41.798-05:00Incompleteness Theorem and the Value of Programming Paradigms<h3>
Crisis of Mathematics Foundation</h3>
<br />
Recently, I have been reading about computational mathematics and its history. One interesting topic is the <a href="http://en.wikipedia.org/wiki/Foundational_crisis_of_mathematics#Foundational_crisis">crisis of mathematics' foundations</a> in the early 1900s. This summary is my interpretation. In this episode, mathematician <a href="http://en.wikipedia.org/wiki/David_Hilbert">David Hilbert</a> becomes worried that his field's foundations, proof by logic and axioms, are crumbling. Why does he think so? People have found paradoxes, unexplainable situations which arise by following his basic set of tools. This opens his field to attack, to be called 'nothing more than useful tools', rather than 'essential truths of the universe'. Everyone wants to spend their time on subjects of importance, right?<br />
<br />
So, if his field's basic tools, its axioms and theories, <em>were</em> complete and true, the mechanics of any one thing could be explained and calculated. However, simple paradoxes are found which his field can't explain. For example, the <a href="http://en.wikipedia.org/wiki/Barber_paradox">barber paradox</a> is a logic puzzle which is an application of <a href="http://en.wikipedia.org/wiki/Russell%27s_paradox">Russell's paradox</a>, in set theory. Set theory mathematicians build theories which can explain the relationships between everything. If you ask a set theorist to explain the barber paradox, he will start to question whether his system's basic tools 'make sense'.<br />
<br />
So you can see that if the usefulness of the basic tools of a mathematician's field is questioned as illogical, he will worry that he is spending his time building a useless system, and he will search for ways to extend his rules to cover this logical hole.<br />
<br />
<h3>
Perceived Crisis of Programming Tools</h3>
<br />
Does this story sound similar to you? I'm a computer programmer, and I often read blogs and such from other programmers. If you don't know, most programmers' discussion topics are "language X is better than language Y", or "tool X is better than tool Y". This is not bad, since a good programmer always asks "Can I do this better?". Of course he must listen when another programmer says "This way is better".<br />
<br />
So, what's the answer? Which language is best? With so much discussion on this topic, surely someone has found an answer. Sadly, no. If you ask a rational and experienced programmer, he will have decided that "The right language depends on the problem you are solving".<br />
<br />
<h3>
Kurt Gödel's Logical Discovery</h3>
<br />
Back to the world of mathematics. Hilbert, like a good programmer, wanted to work with a perfect set of tools, a perfect set of axioms and theorems. What is a "perfect" set of axioms? It is a consistent and complete set, such that they can be composed to explain anything. So, Hilbert is asking for a single set of terms, a single language, to use for all mathematics. Is this possible?<br />
<br />
To answer Hilbert, mathematician <a href="http://en.wikipedia.org/wiki/Kurt_G%C3%B6del">Kurt Gödel</a> proved something interesting about mathematical logic. He showed that inherent limitations exist in any set of tools, that is, in any system of axioms used for natural number arithmetic. In his <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">incompleteness theorems</a>, he proved that any consistent set of true axioms cannot explain all truths. That is, as a user of these axioms, there will always be statements which you know are true, but which you cannot prove using your set of axioms.<br />
<br />
(Side-note: This theorem has been referenced as an axiom to prove many things, from 'god exists and you need faith' to 'math and logic is useless'. However, I believe I am correct to say that these usages are incorrect, because the logic in Gödel's theorem means it can only be applied to formal systems of arithmetic, and only in the context of systematically generating a complete set of axioms.)<br />
<br />
<h3>
Judging Legitimacy of New Languages</h3>
<br />
How is this related to programming paradigms? Consider this extrapolation I made (possibly illogical, but still interesting) from Gödel's idea. Suppose I have a statement which I know is true, but I can't prove it using our set of axioms. What to do? After looking closely, I see that I can slightly modify one of our axioms to create a new one, thereby enabling me to prove my statement. Can I do this? Is my new branch of math a legitimate one? I say yes: if it can be used to correctly describe the problem in a cleaner, more consistent way, then its legitimacy is decided by its practicality and usefulness.<br />
<br />
This story isn't wholly based in my imagination. Consider <a href="http://en.wikipedia.org/wiki/Hyperbolic_geometry">Hyperbolic geometry</a>, which is a relatively new field of mathematics. It takes classical geometry, modifies a single axiom (changing the definition of parallel), and calls it a new kind of math. This new system of geometry is very practical, as it provides tools to solve previously-undescribable problems. Some of these problems are listed in <a href="http://math.stackexchange.com/questions/93765/what-are-the-interesting-applications-of-hyperbolic-geometry">this discussion</a>.<br />
<br />
<h3>
Changing a Single Axiom in Programming Languages</h3>
<br />
How awesome. By changing a single basic assumption, a new way of thinking is developed to easily explain difficult things. In my opinion, this is exactly the criterion we should use to decide whether a new programming language should be used. A useful programming language should obey a single practical philosophy, that is, a single set of axioms.<br />
<br />
For instance, consider <a href="http://en.wikipedia.org/wiki/Clojure">Clojure</a>. Traditional programming is based on manipulating variable values through a series of subroutines. Based on years of experience, the creator of Clojure decided that this idea of mutable state is really bad for modern applications, which increasingly use concurrently-running processes. Clojure restricts a programmer to immutable state, which theoretically makes writing programs that use shared state safer and easier.<br />
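To make the immutability idea concrete, here is a rough analogy in plain Ruby (a sketch of the principle only; Clojure's persistent data structures are much richer than a frozen hash):<br />

```ruby
# Shared data that has been frozen cannot be changed out from
# under a concurrent reader.
config = { retries: 3, timeout: 30 }.freeze

# Any number of threads can read the frozen hash safely...
results = 4.times.map do
  Thread.new { config[:retries] * config[:timeout] }
end.map(&:value)

# ...while an attempted write fails loudly instead of silently
# corrupting state that another thread may be reading.
# (Ruby raises RuntimeError here; newer Rubies raise its
# subclass FrozenError.)
write_failed = false
begin
  config[:retries] = 5
rescue RuntimeError
  write_failed = true
end

puts results.inspect   # [90, 90, 90, 90]
puts write_failed      # true
```

The point is not that Ruby is Clojure, but that removing the ability to mutate shared state removes an entire class of concurrency bugs.<br />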
<br />
Another example is <a href="http://en.wikipedia.org/wiki/Erlang_(programming_language)">Erlang</a>. Traditional programming is based on creating one executable program or process to run on a single machine. The practical world of software development created the notion of software services, such as <a href="http://en.wikipedia.org/wiki/Web_service">web services</a>, which should always be available. The creators of Erlang decided this idea is essential. By constructing their language around this core idea, they concluded that pure functions and fault-tolerant processes are its essential ingredients. By restricting a programmer to use only these ideas, Erlang programs theoretically have few concurrency-based bugs and very little downtime in production.<br />
<br />
<h3>
Final Thoughts</h3>
<br />
Difficult problems can become trivial problems if we change a single rule of the game. Applied to programming languages, I believe this requires more than adding a library to support the new idea. Essential to injecting a new paradigm into a language is restricting the parts of the language which run contrary to the paradigm. John Carmack mentioned his research on functional programming languages in his <a href="http://www.youtube.com/watch?v=1PhArSujR_A&list=PLqSz8wYk5VJTsadQnU9EId6G0AJWA6o0q">QuakeCon 2013 keynote talk</a>. He supports this idea of restricting a programmer when following a new paradigm:<br />
<br />
<pre><code> "Everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase."
"Languages talk about multi-paradigm as if it's a good thing, but multi-paradigm means you can always do the bad thing if you feel you really need to."
</code></pre>
<br />
To liberally interpret his words, tool designers care about supporting a certain way of problem-solving, whereas programmers care about quickly and directly solving the problem at hand.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com2tag:blogger.com,1999:blog-8410610130228839275.post-6458509569970319942013-02-03T11:39:00.001-06:002013-07-11T21:54:10.782-05:00Build Your First Ruby on Rails App<h2>Foreword</h2><p>I am interested in learning Ruby and its Rails website framework for two reasons:<br />
1) To build quick websites.<br />
2) Ruby seems like a useful tool.</p><p>This code article will simply be my notes from the Dreamforce 2012 developer session named "Hands-on Ruby on Rails - Build your First App". Here is its <a href="http://www.youtube.com/watch?v=tPy4F4GYVmQ">recording on YouTube</a> that I'll be using, here are the <a href="http://www.slideshare.net/developerforce/df121326-wall">slides on SlideShare</a>, and here are the <a href="https://github.com/sfckemp/sf13-blog-example">code artifacts on GitHub</a>.</p><br />
<h2>Session Agenda</h2><ul><li>Introduction to Ruby, Rails, and Heroku</li>
<li>Getting Started</li>
<li>Building a Blogging App</li>
<li>Deploying to Heroku</li>
<li>Q&amp;A</li>
</ul><br />
<h2>Intro to Ruby and Rails</h2><ul><li>Ruby was created in 1995 by Yukihiro Matsumoto, a Japanese developer known as Matz.</li>
<li>Ruby was designed to be a cross-platform, simple scripting language.</li>
<li>Rails is a web app framework which uses MVC, which helps organize code.</li>
<li>Rails is designed to be simple by using fewer config files and good architectural patterns.</li>
<li>Rails uses routes, which means it is easy to make a RESTful web app.</li>
<li><p>Rails has a mantra - convention over configuration. One of those is directory structures:</p><ul><li>app/controllers</li>
<li>app/helpers</li>
<li>app/mailers</li>
<li>app/models</li>
<li>app/views</li>
<li>config/</li>
<li>db/</li>
</ul></li>
</ul><br />
<h2>Getting Started with Heroku</h2><ul><li><p>Heroku is nice and you should use it.</p></li>
<li><p>Install Rails Tools</p><ul><li>Install Heroku Toolbelt (CLI, Foreman, Git) - (which I had previously installed)</li>
<li>Install RVM, RubyGems, and also Homebrew if on a Mac</li>
<li>Install Ruby 1.9.2 (which I had previously installed)</li>
<li>Install Rails</li>
<li>Install PostgreSQL</li>
<li><a href="https://devcenter.heroku.com/articles/quickstart">Heroku Quickstart</a></li>
</ul></li>
</ul><br />
<h2>Installing rvm</h2><p>The speaker assumes the audience already has Rails installed. I do not, however, so I'll install it now. I'll be using the <a href="http://railsapps.github.com/installing-rails.html">RailsApps install guide</a>.</p><p>This guide recommends using rvm to manage Ruby and Rails versions. Why do we want rvm? It manages your app's libraries and frameworks, which simplifies upgrading versions. First, rvm downloads and installs Ruby versions and gem versions into its repository, which is in the <code>~/.rvm</code> directory. Then, rvm sets up all the references (such as the PATH) through which your app finds them.</p><ul><li><a href="https://rvm.io/rvm/install/">Install rvm</a><br />
<ul><li><pre class="prettyprint lang-bsh"><code>$ \curl -L https://get.rvm.io | bash -s stable --ruby
</code></pre></li></ul></li>
</ul><br />
<h2>Quick rvm Tutorial</h2><p>To specify and maintain the version of Ruby we want to use, specify the version like this. (Note: To see all Ruby versions that rvm supports, execute <code>rvm list known</code>.)</p><pre class="prettyprint lang-bsh"><code>$ rvm 1.9.2-head
</code></pre><p><code>rvm</code> enables you to create a 'gemset' to manage versions of dependencies, such as libraries or frameworks. Before upgrading to a new version of Rails, the guide recommends using rvm to create a <code>gemset</code> container in which to test it.</p><pre class="prettyprint lang-bsh"><code>$ rvm gemset create rails222 rails126
</code></pre><p>With a gemset for the new Rails version, you can switch to that container and install Rails inside it.</p><pre class="prettyprint lang-bsh"><code>$ rvm 1.9.2-head@rails222
$ gem install rails -v 2.2.2
$ rvm 1.9.2-head@rails126
$ gem install rails -v 1.2.6
</code></pre><p>Now that a gemset is defined for each version of the Rails gem, we can switch to each version of Rails like this.</p><pre class="prettyprint lang-bsh"><code>$ rvm 1.9.2-head@rails222
$ rails --version # Rails 2.2.2
$ rvm 1.9.2-head@rails126
$ rails --version # Rails 1.2.6
</code></pre><br />
<h2>Installing Rails</h2><p>While we won't be using these advanced features of rvm, knowing the extent of this tool is still useful. We will be using just one feature of rvm, which is to install the latest Ruby version. If you didn't already do it, install the latest Ruby like this.</p><ul><li><p>Install the Latest Ruby</p><pre class="prettyprint lang-bsh"><code>$ rvm 1.9.2-head
</code></pre></li>
</ul><p>Now that we have the latest Ruby, we can continue following the Rails install guide <a href="http://railsapps.github.com/installing-rails.html">here</a> at the 'Install Rails 3.2.11' section.</p><p>The rvm package includes RubyGems, which is a package manager for Ruby. This tool makes it easy to use frameworks and libraries that exist in the Ruby ecosystem by centrally hosting various versions of registered packages or arbitrary code. We will use RubyGems to install the Rails package.</p><ul><li><p>Install Rails</p><pre class="prettyprint lang-bsh"><code>$ gem install rails
$ rails -v
</code></pre></li>
</ul><p>I am using Ubuntu 12.04. Rails will attempt to use SQLite as a default database solution. Bundler will install the <code>sqlite3</code> Ruby gem, which is the Ruby interface code to the SQLite database software. If the SQLite software doesn't exist on the system, this Ruby gem can't interface with it, and will cause an error. I haven't installed any database software yet, so generating a default Rails app will fail. To solve this, we need to install the <code>sqlite3</code> database software using apt-get.</p><ul><li><p>Install SQLite</p><pre class="prettyprint lang-bsh"><code>$ sudo apt-get install sqlite3 libsqlite3-dev
</code></pre></li>
</ul><br />
<h2>Using Rails</h2><p>Rails is not only a web app framework, but it is also a code generator. Rails can generate a complex application that has many components. Because these components work together out of the box, your job as a web app dev is to customize the app, rather than spending lots of time gathering app components, connecting them together, and testing them.</p><p>So, let's tell Rails to generate a new app for us. It only requires us to name the app. Let's name it 'blog'.</p><pre class="prettyprint lang-bsh"><code>$ rails new blog
</code></pre><p>This creates the directory structure for a web app and fills it with default HTML, CSS, and JS files. It also sets up the HTTP server and Ruby classes that handle HTTP requests. Most importantly, this generates our app's <code>Gemfile</code>.</p><p>There are many libraries and third-party components, such as an HTTP server, that compose our app. These dependencies are all declared in a single location, the <code>Gemfile</code>. A tool called 'Bundler' reads this file to download and install these dependencies for our app to use.</p><p>One of these dependencies is a database solution.</p><br />
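<p>For reference, a freshly generated Gemfile from this era of Rails looks roughly like the sketch below. This is from memory, not the literal generator output; the exact gems and versions vary by Rails version.</p>

```ruby
source 'https://rubygems.org'

gem 'rails', '3.2.11'      # the framework itself is just another gem
gem 'sqlite3'              # default development database

# Gems used only when compiling assets
group :assets do
  gem 'sass-rails'
  gem 'coffee-rails'
  gem 'uglifier'
end

gem 'jquery-rails'
```

<p>Bundler reads this file when you run <code>bundle install</code>, resolves a compatible set of versions, and records them in <code>Gemfile.lock</code>.</p>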
<h2>Setting up Databases</h2><p>Rails uses SQLite as its default database. This is a nice database because it's fast, which makes development more fun. However, because the production environment might require a heavier database, we might want to use two different databases - one for development and one for production. The PostgreSQL database has some really cool features, and Heroku likes this database, so I want to use it in production. Rails supports using multiple databases like this, so let's take a look.</p><p>Bundler installs our database software. We can ask Bundler to install SQLite if the environment is called 'development', and install PostgreSQL if it is called 'production'. Let's set up Bundler to do this.</p><ul><li><p>Define different development and production databases</p><ul><li>Open the <code>Gemfile</code>, which is located in the Rails app's root directory.</li>
<li>Find the <code>gem 'sqlite3'</code> statement</li>
<li><p>Replace this line with the following</p><pre class="prettyprint lang-rb"><code>group :production do
  gem "pg"
end

group :development, :test do
  gem "sqlite3"
end
</code></pre></li>
</ul></li>
</ul><p>Now, when this app is deployed to production and installed, Bundler will run, see that it's on production, and install PostgreSQL instead of SQLite. If we run Bundler on our app right now, in development, it will still download PostgreSQL (a plain <code>bundle install</code> installs all groups unless told otherwise), but our app won't use it.</p><p>If we try to install this additional dependency now, it will fail because we haven't installed the PostgreSQL software to which this gem will attach. So let's install PostgreSQL.</p><ul><li><p>Install PostgreSQL and header files for its libraries</p><pre class="prettyprint lang-bsh"><code>$ sudo apt-get install postgresql-9.1
$ sudo apt-get install libpq-dev
</code></pre></li>
</ul><p>We can test this by running Bundler on our app again.</p><ul><li><p>Install app dependencies with Bundler</p><ul><li><p>Navigate to the Rails app's root directory.</p><pre class="prettyprint lang-bsh"><code>$ bundle install
</code></pre></li>
</ul></li>
</ul><br />
<h2>Set up Data Model & Scaffolding</h2><p>Our blog will be simple - its posts will have only a 'title' and a 'body'.</p><p>Adding a new database object to an application traditionally isn't as simple as clicking an 'Add' button. There are a few places to make this addition, namely in the database, in the persistence layer definitions, and Ruby classes to match the new object. These changes can be predictable for most use cases, so Rails can generate this code for us. In Rails, this stack of code for using and persisting an object in this manner is called a 'scaffold'. So, let's have Rails make this scaffold for us.</p><ul><li><p>Make a new persistable object in Rails</p><ul><li>Navigate to your app's root directory</li>
<li><p>Execute the following command to define and implement a new model named 'Post'</p><pre class="prettyprint lang-bsh"><code>$ rails generate scaffold Post title:string body:text
</code></pre></li>
</ul></li>
</ul><p>Note: You may have no problem, but I ran into an issue on Ubuntu. The result from running the command above is "Could not find a JavaScript runtime.", which I think is related to auto-compiling CoffeeScript. I really don't like installing Node.js just to allow Rails to auto-compile CoffeeScript, but this was my only way forward. A gem called <a href="https://github.com/cowboyd/therubyracer"><code>therubyracer</code></a> can be placed in the <code>Gemfile</code>, which should fix this, but it didn't work for me. Maybe this issue is fixed in Rails 4; I'll check later. For now, to add a JavaScript runtime manually, we must install Node.js. After installing it, run the scaffold-generating command again.</p><ul><li><p>Install Node.js</p><pre class="prettyprint lang-bsh"><code>$ sudo apt-get install nodejs
</code></pre></li>
</ul><p>When the scaffold script is running, you can see it create some Ruby files, some Ruby HTML templates, and some CSS files. Let's feel lucky we didn't have to write that ourselves.</p><br />
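<p>The heart of what was generated is a database migration describing the new table. It looks roughly like the sketch below (from memory of Rails 3 scaffold output, not the literal generated file; the filename's timestamp prefix will differ):</p>

```ruby
# db/migrate/20130203000000_create_posts.rb
class CreatePosts < ActiveRecord::Migration
  def change
    create_table :posts do |t|
      t.string :title    # from the title:string argument
      t.text :body       # from the body:text argument

      t.timestamps       # adds created_at and updated_at columns
    end
  end
end
```

<p>Running <code>rake db:migrate</code> later will execute this file to create the actual <code>posts</code> table.</p>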
<h2>Customize Model's View Page</h2><p>In customizing our app, we will spend the majority of our time in a few places.</p><ul><li><code>app/views/</code></li>
<li><code>app/controllers/</code></li>
<li><code>app/models/</code></li>
<li><code>config/</code></li>
</ul><p>When a user navigates to our app using a web browser, they will be requesting files from our <code>views</code> directory. Each view will have dynamic data behind it, so a developer will be modifying the respective controllers, which live in the <code>controllers</code> directory.</p><p>That's all we need to know about this for now.</p><br />
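<p>To illustrate the flow, here is a toy sketch in plain Ruby (not real Rails code; the class and method names are stand-ins) of how a request moves through a controller action, gathers model data, and is turned into HTML by a view:</p>

```ruby
# A toy model object standing in for an ActiveRecord Post.
Post = Struct.new(:title, :body)

class PostsController
  # Stand-in for the index action in app/controllers/posts_controller.rb:
  # gather the model data, then hand it to the "view" for rendering.
  def index
    posts = [Post.new("Hello", "First post!")]
    render(posts)
  end

  # Stand-in for rendering app/views/posts/index.html.erb,
  # where the template iterates over the posts.
  def render(posts)
    posts.map { |p| "<h2>#{p.title}</h2><p>#{p.body}</p>" }.join
  end
end

html = PostsController.new.index
puts html   # <h2>Hello</h2><p>First post!</p>
```

<p>Real Rails separates these pieces into files and wires them together for you; the point is only that views render data that controllers gather from models.</p>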
<h2>Changing the Landing Page</h2><p>The default landing page for our web app is a static page, specifically the <code>public/index.html</code> file.</p><p>One way we can customize the landing page is to define a dynamic route matcher. When a user enters a URL, something like <code>mysite.com/posts/12345</code>, the <code>posts/12345</code> bit is called the route.</p><p>Your app may want to listen for these routes and respond when a pattern is matched. You can define such patterns in the <code>config/routes.rb</code> file. Let's add a route matcher for requests for our app's root.</p><ul><li><p>Add a route match for root</p><ul><li>Open the <code>config/routes.rb</code> file</li>
<li><p>Add this line near the top</p><pre class="prettyprint lang-rb"><code>root :to => "posts#index"
</code></pre></li>
<li><p>Save the file</p></li>
</ul></li>
</ul><p>Now, Rails will use our routes mapping for a root request only if the <code>public/index.html</code> file doesn't exist. So, to activate our root mapping, we have to hide this file. Let's just change its name.</p><ul><li><p>Hide the default root file</p><ul><li><p>Rename the root file by executing this from the root directory</p><pre class="prettyprint lang-bsh"><code>$ mv public/index.html public/index.html.bak
</code></pre></li>
</ul></li>
</ul><p>When we start our app and navigate to the root page, we should see the index page for our posts.</p><br />
<h2>Testing Our App Locally</h2><p>Most of initializing and starting our app locally involves database preparation. We used Rails' <code>scaffold</code> command to add a Post object to our app. This command also added entries to our database management scripts, which is convenient. All we're left to do is run those scripts. Then, we can start our app server and test it with a web browser.</p><ul><li><p>Create the database</p><pre class="prettyprint lang-bsh"><code>$ rake db:create
</code></pre></li>
<li><p>Run database migrations</p><pre class="prettyprint lang-bsh"><code>$ rake db:migrate
</code></pre></li>
<li><p>Start the app</p><pre class="prettyprint lang-bsh"><code>$ rails server
</code></pre></li>
</ul><p>When you navigate to <code>0.0.0.0:3000</code> in your web browser, you should see a boring page that says "Listing posts" and has a "New Post" link.</p><p>Congrats! We have a working app!</p><br />
<h2>Conclusion</h2><p>After following the Rails walkthrough as organized by Chris Kemp, it seems like a pretty simple platform on which to create web apps. Do I dare say that it seems to be as simple as making Force.com apps?</p><p>Yes, figuring out what you need to start working on a Rails app and setting up your environment is a bit of work, but once prepared, customizing your app seems to be really straightforward.</p><p>I'm curious how web apps manage user authentication and permissions, so I'll be looking into that next. It looks like Rails has good support for two authentication frameworks/libraries, called Devise and OmniAuth. I have no idea how these work, but the RailsApps group seems to have a good <a href="http://railsapps.github.com/tutorial-rails-mongoid-devise.html">tutorial on creating a Rails app which uses Devise and Mongoid</a>, so I'll be looking at this next.</p><p>Thanks for reading!</p>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-41451001713784878352013-01-02T21:39:00.001-06:002013-02-03T13:41:28.669-06:002012 Retrospective<h3>Last Year's Accomplishments</h3><br />
One of my priorities in life is to keep improving and growing. So, to see my progress, motivate me, and give me self-confidence, I like to review my accomplishments from the last year, both big and small.<br />
<br />
I'll list everything I'm proud of, in no order. Then I will choose a few significant ones to elaborate:<br />
<br />
<ul><li>Spoke at a national software conference</li>
<li>Travelled to Vietnam and Taiwan</li>
<li>Fixed a falling garage door opener with my friend</li>
<li>Replaced the retaining wall for my home's egress window with my dad</li>
<li>Contributed to an open source software project</li>
<li>Significantly improved my JavaScript skills, which is still only the first few steps down that road</li>
<li>Learned more about my personality and moods</li>
<li>Set up a mini home server computer</li>
<li>Learned how to host a personal web server</li>
<li>Started learning how to use vim to edit text files for blog posts and what not</li>
<li>Dramatically improved my Chinese skills since really starting 12 months ago</li>
<li>Watched every episode of Star Trek to build my nerd-cred</li>
<li>Learned about early and middle history of Vietnam</li>
<li>Started learning about early history of China</li>
<li>Attained first rank (5-kyuu) in aikido after 6 months of training</li>
<li>Built better posture and self-confidence after lifting weights for 6 months</li>
<li>Improved straight-blade shaving skills by experimenting with variables of the art</li>
<li>Made home-made yogurt and bread. Failed attempt at brewing kombucha, but I learned from it</li>
<li>Bought many kinds of loose-leaf tea and became a fan of tea</li>
<li>Attended an anime convention with friends, which was an interesting experience</li>
<li>Was a member of the wedding party for a best college friend's wedding, which was an honor and humbling</li>
<li>Authored a programming article that was published on a major site</li>
</ul><br />
<h3>Spoke at National Conference</h3><br />
In the middle of summer last year I picked up an open source project that helps build HTML5 mobile apps on Salesforce to learn more about the practice. I believe I was the only one publicly making noise about the project by posting blog posts and tweets about it. The developer relations group in Salesforce announced a Call for Papers for Dreamforce 2012, which is the first year they included heavy support for both developers and the developer community at the huge conference. I thought I was qualified to talk about this open source project at Dreamforce, so I crafted an idea for a talk and an abstract to submit. A few weeks later, my talk was accepted. I was humbled for being chosen, excited for the opportunity, and scared of failure. I spent many hours in the next 2-3 months creating my talk, revising it, practicing it, and revising and practicing it more.<br />
<br />
I finally found myself on the airplane to San Francisco. The night before my talk, I found myself in my hotel room practicing and still revising my talk. My talk was at noon on Wednesday, so I arrived at the conference hall an hour early to prepare my room. The conference employees told me that I had to be in a room watching Salesforce's CEO give his keynote talk, and ushered me into one to watch it. His keynote ran more than thirty minutes long, which meant that I was not able to get into my room until 11:45am.<br />
<br />
This proved to be not enough time, since I found that my netbook was not compatible with the laptop presentation system in the room. I had to move my slide deck from my Linux laptop onto the Windows laptop which was provided with the room. I later found that several important images did not survive the file conversion to Windows' file format. I meant to create suspense before showing a slide with an image, but the audience laughed when I switched to a blank slide, and the suspense fell to the floor.<br />
<br />
There was one other major issue with my presentation. My talk was about writing HTML5 applications, which are rendered in a browser. When I demonstrated how to use the software by opening the app in a browser, I was given a blank screen. Where was my app? I decided that the code must be wrong, and rather than debug the app, it would be best to continue. My second code demonstration met the same fate. What to do? Luckily, an audience member suggested I try a different browser. After spending a few empty minutes on stage loading my app into a new browser, I found it worked perfectly. Nice! This was hardly my fault, since I was forced to use a different laptop just minutes before starting my talk. An educational experience, to be sure.<br />
<br />
In the end, it was a pretty great talk. At least five of the 150+ attendees stepped up after my talk to shake my hand and pay me compliments. One person later said that it was one of the most educational talks he attended at the four-day conference. How nice! <br />
<br />
<h3>Visited Vietnam and Taiwan</h3><br />
From San Francisco, I skipped the last day of the conference and flew directly to Vietnam to attend a friend's wedding. I arrived on Saturday morning, and fought occasional punches of jet lag all day. The wedding was elaborate and I was humbled to be able to be there. I think it is the most international wedding I will ever attend. The bride and groom's classmates from Japan, family from Vietnam and Scotland, and friends from even more countries were present to give their best wishes.<br />
<br />
After the wedding, three of us travelled around Vietnam. One friend from Vietnam was an expert tour guide and served as a translator, a near-necessity when travelling to the smaller towns. With my other friend from Taiwan, the three of us saw remnants of the last war, climbed mountains, visited a historic trading town, and saw dragons dancing for the autumn festival. And the food - the food was incredible. So many vegetables. Loved the variety.<br />
<br />
After returning, I saw that my company had a two-day holiday for Thanksgiving, so I capitalized on the chance to take another extended vacation by spending a week in Taiwan to visit my friend there. Together, we again had the best vacation ever. We ate interesting food every day, climbed a mountain, saw the sun rise above the clouds, saw various sea creatures on a coral island, and visited a town that was the site of the first trading settlement in the country. Awesome variety. We rode bullet trains, normal trains, subways, tour buses, city buses, taxis, and cars, not including the airplane home. My lust for adventure with friends was satisfied again. So awesome.<br />
<br />
<h3>Built Some Muscle and Learned Some Aikido</h3><br />
I joined a gym and an aikido dojo in July last year, both at about the same time. After spending nearly two years building my software engineering knowledge, I felt a growing desire to build myself physically. I was a thin and kinda awkward guy, so I made a conscious decision to commit to building upper-body muscle and to learn a martial art. I visited a few different martial art dojos a year or two before, and I decided at that time that aikido was the best fit for me.<br />
<br />
I'm proud to say that I was successful with both commitments. The first few months were very difficult, mostly because I started a strict diet while lifting heavy weights 4-5 times a week and also spending 4-5 days a week training aikido. I arrived home at 8pm every day completely physically exhausted. After stopping the strict diet, I learned that the persistent lack of energy I experienced was caused by a lack of proper nutrition. I added fat and more calories into my diet, and my mood and energy quickly returned. It was an educational experience, to be sure.<br />
<br />
Several times during these six months I was very close to quitting aikido. However, I chose to honor my commitment and I continued my training. The dojo follows a very traditional Japanese style. In our dojo, this means that there is a deep respect for older students and a deeper respect for dojo rules. This respect manifests in the form of ceremony and behavioral corrections by older students. I felt very restricted by the rules. I wanted to ask questions about how to perform a technique, but I was discouraged from doing so. Instead, I was told that aikido is something that your body learns by doing and explanations will not help. I also felt restricted from learning by experimenting. When I did experiment, I was told "You're doing it wrong. The technique is supposed to go like this". This was frustrating because everyone did the same technique differently and had different opinions, but I couldn't create my own flavor. It was a restrictive environment indeed.<br />
<br />
After six months of training, however, I took my first test and attained the first rank for adults (5-kyuu). The test consisted of only 5 techniques, so I wasn't too worried about it. I had practiced the same damned techniques so many times in the previous six months that I think I would go crazy if I were forced to practice them again. Despite my low expectations for satisfaction from the test, I surprised myself by how happy I was to pass. After the test, sensei said, "Alex", and I stood up. He said "Pass!" with the same serious and straight face he always had, and then gave his comments on my test. "That was some good stuff." This was only the third time in six months that sensei directly addressed me. The last time he did so, he asked me how old I was; his only response to my answer was a deep, dark laugh before he moved on to train with another student.<br />
<br />
I derived zero fun from my time at the aikido dojo. I actually disliked class. I didn't dislike the ceremony, but rather, it was the zero-explanation behavioral fixes from older students that I grew to dislike. I really didn't like operating in the dojo under the constant fear of breaking "the code". This refers to the proper code of conduct, which only a few older students knew. They grew to understand sensei's true wishes because they trained with him the longest and they have a special relationship with him. It's very much like a religion in which a priest claims he communicates with God. Because the priest is closest to God, you have no choice but to believe and obey what he says. This situation bothered me quite deeply, and I really questioned my long-term compatibility with the dojo.<br />
<br />
Having said all this, studying aikido at the dojo was still a very educational experience. I learned a lot about commitment and about situations that don't agree with me. This will be a good point of reference for later in life, to be sure.<br />
<br />
<h3>Focus for 2013</h3><br />
Having considered my progress in 2012, I would like to create areas of focus for my progress in 2013.<br />
<br />
Last year, I made great progress in my career. I wanted to become an expert in my area, and I think I came close to attaining that. Am I satisfied with 2012 completely? No, not completely. One of my problems is that I push people away from me so that I can focus on satisfying my curiosity about software. I notice that one thing hasn't changed at all since last year: my friend count. While I have gained new friends in my career path, I haven't made new friends outside of work. Is this ok? I feel like I should diversify. Therefore, one of my areas of focus in 2013 is to spend less time learning about software and more time building friendships. This will be very challenging for me. I must recruit a friend to help me with this.<br />
<br />
The second area of focus for me next year is to become even better at Chinese. I have made a lot of progress in the last year with my casual Chinese studies. I can recognize words when listening to songs, I can get a rough idea of a casual Chinese sentence on the Internet, I know many basic conversation words, and I have a pretty decent vocabulary about things I do. However, this opinion is all relative to my geographic location, which is Fargo, ND. If I want to get serious about learning Chinese, I would have to move to China. Anyways, I want to focus more on my Chinese studies next year.<br />
<br />
This concludes my 2012 retrospective and 2013 goals. I'm all about encouragement in the right direction and setting attainable goals. I believe I've made my goals plenty general. Good luck to me and to you in 2013!<br />
<br />
Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-25171016570152663282012-11-18T10:42:00.003-06:002013-07-27T01:42:24.801-05:00Creating a Public Web Server on Raspberry Pi<h3>Foreword</h3><br />
I bought a <a href="https://en.wikipedia.org/wiki/Raspberry_Pi">Raspberry Pi</a> a few months ago with the intent to have a toy web server at home. It's cheap (~$30), low-power (~3W), and doesn't need a cooling fan, so it's the ideal toy server that I won't feel bad about running 24/7. It has an ARM processor instead of the more common x86 processor, so you can only install certain OSes on it. Windows, Mac OS X, and most distributions of GNU Linux are normally compiled to create binaries that execute on x86 processors, so those builds can't be used. The Linux kernel <i>can</i> be compiled to create ARM binaries, so certain Linux distributions are compatible with this small computer. The OS that was custom-made for this small computer is called Raspbian, which is what I am using, but you can also install <a href="http://elinux.org/RPi_Distributions">other OSes</a>, including Android, Fedora, and XBMC.<br />
<br />
I don't have an HDMI computer display, nor an HDMI adapter, so I will be setting up my web server by sending terminal commands via SSH. Some people may be put off by the terminal, but I'll try to stay organized so we don't get lost.<br />
<br />
I spent a lot of time reading about the details of networking, remote administration, and web server setup and best practices. This project is certainly the entrance to a rabbit hole of other interesting details you don't normally think about on a daily basis. Even though it consumed a terrible amount of time, I gained a great deal of satisfaction.<br />
<br />
<h3>Prerequisites</h3><br />
I performed a bit of setup before starting this project, so you may need to:<br />
<ul><li>Download and install the Raspbian OS onto an SD card</li>
<li>Play around a bit on the OS, read the provided introductory Linux documentation</li>
<li>Install some packages you like using <code>apt-get</code>, such as Git</li>
<li>Connect power to the Raspberry Pi and connect it to your home network</li>
<li>Connect your laptop via ethernet to the same network as the Raspberry Pi</li>
</ul><br />
Before we can begin, we must connect our Raspberry Pi to our network, as stated in the prerequisites.<br />
<br />
Our first step is to ensure we can interface with our Raspberry Pi's OS. I will not be using a keyboard, mouse, or monitor directly connected to the device; instead, I will be interfacing with it by sending terminal commands over SSH. The device declares a default <a href="https://en.wikipedia.org/wiki/Hostname">hostname</a> of <i>raspberrypi</i>, which the local network can use to uniquely identify your device. This is very convenient because we don't have to find the IP address that the router assigned to it; we can simply address it as <code>raspberrypi</code>. If you want to change the device's hostname, there are <a href="http://www.ducea.com/2006/08/07/how-to-change-the-hostname-of-a-linux-system/">ways to do this</a>.<br />
<br />
<ul><li>Connect to your Raspberry Pi with SSH<br />
<ul><li>From your laptop, open a terminal and enter <code>ssh pi@raspberrypi</code>, where <code>pi</code> is the name of the user account you want to use.</li>
<li>Enter your password, which is <code>raspberry</code> by default.</li>
<li>Your current directory is the <code>pi</code> user's home directory. Any subsequent commands will be executed on the Raspberry Pi.</li>
<li>Type <code>exit</code> to quit your SSH session.</li>
</ul></li>
</ul><br />
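As a convenience, you can store these connection details in OpenSSH's per-user client configuration on your laptop (a standard OpenSSH feature; the <code>pi-home</code> alias below is just an example name):

```text
# ~/.ssh/config on the laptop
Host pi-home
    HostName raspberrypi
    User pi
```

With this in place, <code>ssh pi-home</code> is equivalent to <code>ssh pi@raspberrypi</code>.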
<h3>Install a Web Server</h3>There are a number of web server applications out there, such as <a href="https://en.wikipedia.org/wiki/Apache_HTTP_Server">Apache</a>, <a href="https://en.wikipedia.org/wiki/Nginx">Nginx</a>, <a href="https://en.wikipedia.org/wiki/Lighttpd">lighttpd</a>, and <a href="https://en.wikipedia.org/wiki/Jetty_(web_server)">Jetty</a>. Apache is by far the most popular web server, so you may want to try that, but I'll be using Nginx because I like to feel unique. Actually, I can think of a good reason: Nginx is reputed to have a small memory footprint, which is a good match for a low-memory computer like the Raspberry Pi.<br />
<ul><li>Install the Nginx package from the default Raspbian repositories.<br />
<br />
<ul><li>Just to verify we don't already have Nginx on this device, type <code>which nginx</code> into the SSH terminal. It should give us no result.</li>
<li>Update our <code>apt-get</code> sources by typing <code>sudo apt-get update</code></li>
<li>Just to verify that Nginx is in the default repositories, type <code>apt-cache search nginx</code>. We should see several results, including the simply-named 'nginx' package.</li>
<li>Install the Nginx package by typing <code>sudo apt-get install nginx</code>.</li>
</ul></li>
<li>Validate that Nginx is installed<br />
<br />
<ul><li>Type <code>service --status-all</code>. We should see an entry called 'nginx'.</li>
</ul></li>
</ul><br />
<h3>Web Server Security</h3>We just set up an application that will accept requests from the rest of the internet, which can be quite dangerous, so let's add some security. Like a good parent, we need to tell our web server not to talk to strangers. Because this is one of my first web servers, I will refer to the web server security guide on <a href="http://library.linode.com/securing-your-server">Linode</a> for all security advice, mostly because it seems to be well-written.<br />
<br />
Because some people make their careers as penetration specialists, I have always been curious about server protection best practices. Some of the protection I will set up may be redundant in a home server situation, but it is never redundant if new knowledge is gained! If I'm missing some important steps, please tell me - I would love to know!<br />
<br />
<h4>Secure User Accounts</h4>My Raspberry Pi had a default user named 'pi' which I have been using. One concern with web servers is that if a hacker gains control of a web request process, that process runs with the permissions of its owner. I want to ensure that Nginx request-handling processes are owned by a limited-permissions user.<br />
<br />
Before that, we have one more important change: change the default password of the default user. We will later expose this server's SSH port to the public internet, and a stranger on the internet might try a set of default username/password combinations.<br />
<ul><li>Change the default password of the default user<br />
<ul><li>Open an SSH terminal into your Raspberry Pi</li>
<li>Type <code>passwd pi</code> to change the password for the device's default user account.</li>
<li>Enter the existing password, the new password twice, and hit enter to save the change.</li>
</ul></li>
</ul><br />
Nginx runs as a master process that spawns worker processes to handle web requests. The Apache community says that the 'www-data' user should handle web requests, so we'll do the same. While the Nginx master process runs as the 'root' user, each worker process runs as the 'nobody' user by default. The worker process owner can be customized in the <code>/etc/nginx/nginx.conf</code> file.<br />
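For reference, the top of a stock Raspbian <code>/etc/nginx/nginx.conf</code> looks roughly like this (exact values vary by package version):

```nginx
# /etc/nginx/nginx.conf (excerpt) - worker processes run as this user
user www-data;
worker_processes 4;
pid /run/nginx.pid;
```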
<ul><li>Ensure web request processes run as limited-permission user<br />
<br />
<ul><li>Open an SSH terminal into your Raspberry Pi</li>
<li>Enter the following command to check the Nginx default user: <code>nano /etc/nginx/nginx.conf</code>.</li>
<li>Ensure the first line of this configuration file is <code>user www-data;</code>.</li>
</ul></li>
<li>Use SSH key pair authentication instead of passwords<br />
<br />
<ul><li>On your laptop, open a terminal</li>
<li>Generate an SSH key pair by typing <code>ssh-keygen</code></li>
<li>Copy the public key to the Raspberry Pi by typing <code>ssh-copy-id pi@raspberrypi</code>; future SSH logins can then use the key instead of the password</li>
</ul></li>
</ul><h4>Set up a Firewall</h4>We should also set up a firewall to protect our computer from port scanners and other malicious programs. A firewall is basically a set of rules that limits or blocks incoming or outgoing network traffic. After a bit of research, it seems that a tool called <a href="http://www.netfilter.org/">iptables</a> is the most popular solution for this. It is also the solution proposed in <a href="http://library.linode.com/securing-your-server#sph_creating-a-firewall">Linode's server security guide</a>.<br />
<br />
The kernel shipped with my Raspberry Pi isn't compiled with iptables support, so you may have to upgrade your firmware first. Luckily, it was as simple as a few terminal commands using the <a href="https://github.com/Hexxeh/rpi-update">rpi-update</a> tool that a user named Hexxeh has created.<br />
<ul><li>Upgrade your Raspberry Pi's kernel<br />
<ul><li>Open an SSH terminal into your Raspberry Pi.</li>
<li>Get the convenient upgrade script by running <code>sudo wget http://goo.gl/1BOfJ -O /usr/bin/rpi-update && sudo chmod +x /usr/bin/rpi-update</code>.</li>
<li>You may need the ca-certificates package to make a request to GitHub, so run this <code>sudo apt-get install ca-certificates</code>.</li>
<li>Finally, to upgrade your Raspberry Pi's firmware, run <code>sudo rpi-update</code>. This will take ~5 minutes.</li>
</ul></li>
</ul><br />
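A quick sanity check around the upgrade: record the kernel version before running <code>rpi-update</code>, then run the same command after rebooting and confirm the version changed.

```shell
# Print the running kernel version; the firmware upgrade installs a new
# kernel, which takes effect after a reboot.
uname -r
```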
Now that our Raspberry Pi has the iptables program, let's set it up.<br />
<ul><li>Set up a firewall<br />
<ul><li>Open an SSH terminal into your Raspberry Pi.</li>
<li>Check your default firewall rules by running <code>sudo iptables -L</code>.</li>
<li>Add firewall rules by creating a file by running <code>sudo nano /etc/iptables.firewall.rules</code>.</li>
<li>Copy and paste the basic rule set below into this file and save it:</li>
</ul></li>
</ul><br />
<i>/etc/iptables.firewall.rules</i><br />
<br />
<pre class="prettyprint lang-bsh"><code>*filter
# Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT -d 127.0.0.0/8 -j REJECT
# Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow all outbound traffic - you can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allow HTTP and HTTPS connections from anywhere (the normal web ports), plus 8080 for our test setup
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 8080 -j ACCEPT
# Allow SSH connections
#
# The --dport number should be the same port number you set in sshd_config
#
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
# Allow ping
-A INPUT -p icmp -j ACCEPT
# Log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
# Drop all other inbound - default deny unless explicitly allowed policy
-A INPUT -j DROP
-A FORWARD -j DROP
COMMIT
</code></pre><br />
<ul><li>Set up a firewall (continued)<br />
<br />
<ul><li>Load the firewall rules by running <code>sudo iptables-restore < /etc/iptables.firewall.rules</code>.</li>
<li>Verify the rules have been loaded by running <code>sudo iptables -L</code>.</li>
<li>Now, to load these firewall rules every time the network adaptor is initialized, make a new file in the network adaptor hooks by running <code>sudo nano /etc/network/if-pre-up.d/firewall</code>.</li>
<li>Save the following text in this file:<br />
<br />
<code>#!/bin/sh</code><br />
<br />
<code>/sbin/iptables-restore < /etc/iptables.firewall.rules</code></li>
<li>Finally, make this script executable by running <code>sudo chmod +x /etc/network/if-pre-up.d/firewall</code>.</li>
</ul></li>
</ul><h4>Defend Against Brute-force Attacks</h4>One more thing we should worry about is internet users attempting to access our SSH account by running a dictionary attack against our password. There is a handy utility called <a href="http://en.wikipedia.org/wiki/Fail2ban">Fail2Ban</a> that monitors your log files for failed login attempts and temporarily blocks offending IP addresses.<br />
<br />
<ul><li>Install and configure Fail2Ban<br />
<ul><li>Install the Fail2Ban packages by running <code>sudo apt-get install fail2ban</code>.</li>
<li>You can customize it in various ways, but I just followed the recommendations of <a href="http://snippets.aktagon.com/snippets/554-How-to-Secure-an-nginx-Server-with-Fail2Ban">these code snippets</a> to add Nginx monitoring to Fail2Ban.</li>
</ul></li>
</ul><br />
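For reference, a minimal SSH jail override might look like this (this uses Fail2Ban's standard <code>jail.local</code> mechanism; the values shown are illustrative, not taken from the snippets linked above):

```ini
# /etc/fail2ban/jail.local - overrides for jail.conf
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```

After editing, apply the new jail configuration with <code>sudo service fail2ban restart</code>.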
<h4>Secure SSH</h4><br />
I told my firewall to allow traffic through port 22, which is SSH. This means anybody, not only I, can try to SSH into my Raspberry Pi. To better secure this, we have already taken two good steps: 1) changed the default user's password, and 2) protected against brute-force attacks. But I want to do one more thing.<br />
<br />
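For orientation, here are the <code>/etc/ssh/sshd_config</code> lines most relevant to SSH hardening (these are standard OpenSSH options; the values shown are typical Debian-era defaults):

```text
# /etc/ssh/sshd_config (excerpt)
Port 22                      # must match the SSH port allowed in the firewall rules
PasswordAuthentication yes   # consider setting to "no" once key-based login works
PermitRootLogin yes          # restricted in the next step
```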
<ul><li>Restrict root for SSH<br />
<ul><li><code>sudo vi /etc/ssh/sshd_config</code></li>
<li>Change the <code>PermitRootLogin</code> line as follows, which allows root to log in with an SSH key but never with a password:<br />
<br />
<code>PermitRootLogin without-password</code></li>
<li>Apply the change by restarting the SSH service: <code>sudo service ssh restart</code>.</li>
</ul></li>
</ul><h4>Update the Server's Software</h4>A best practice for server admins is to ensure all server software is kept up-to-date. This is really easy in a Linux system like this, so there's no excuse. You should do this about once a month, or whenever you think of it.<br />
<ul><li>Update all installed packages<br />
<ul><li>Open an SSH session with your Raspberry Pi.</li>
<li>Update your index of available packages and versions by running <code>sudo apt-get update</code>.</li>
<li>Update your OS's installed software by running <code>sudo apt-get upgrade</code>. This took ~10 min for me.</li>
</ul></li>
</ul><h3>Request a Page from Your Private Web Server</h3>Now that we have done our due diligence by securing our web server, let's start up Nginx.<br />
<ul><li>Check the default Nginx config file by running <code>sudo nano /etc/nginx/sites-enabled/default</code>.<br />
<ul><li>I am fine with the default setup. I just changed it to listen on port 8080.</li>
</ul></li>
<li>Start the server<br />
<ul><li>Open an SSH terminal and run this on the Raspberry Pi: <code>sudo service nginx start</code>.</li>
</ul></li>
<li>Check the web server<br />
<ul><li>The default static web directory is <code>/usr/share/nginx/www/</code>.</li>
<li>Open a browser on your laptop and navigate to <code>http://raspberrypi:8080</code> (the port we configured above). You should see the default <i>index.html</i> file created by Nginx, which says something like "Welcome to Nginx!".</li>
</ul></li>
</ul><h3>Request a Page from Your Public Web Server</h3>The final hurdle is to make your Raspberry Pi accessible to the rest of the world. My router does not forward outside requests to the Raspberry Pi, so we will need to do some port forwarding. I added the <a href="https://en.wikipedia.org/wiki/DD-WRT">DD-WRT</a> firmware to my router quite a while ago, so if your router runs different firmware, you may need to find a port-forwarding guide specific to it.<br />
<br />
<h4>Make Server Visible by Public IP Address</h4><ul><li>Navigate to your router by IP. I enter 192.168.1.1 into my browser.</li>
<li>Find the Port Forward settings<br />
<ul><li>Add a new entry called <i>rpi-web</i>. <i>FromPort=8080</i>, <i>ToPort=8080</i>, <i>IpAddress=(Raspberry Pi internal Ip)</i></li>
</ul></li>
</ul><br />
Even with the port forwards, my web server still wasn't accessible by its public IP. With a friend's help, I tried putting it into my router's <a href="http://en.wikipedia.org/wiki/DMZ_%28computing%29">DMZ</a>, which fixed the problem. It seems the purpose of a DMZ machine is to be the middle ground between your trusted/local network and the enemy/public network. This makes the DMZ machine almost wholly visible to the public internet, so the iptables rules we set up earlier are now doing real work. You may also want to try this on your router if you are having problems.<br />
<ul><li>Put your server in your router's DMZ.<br />
<ul><li>I have DD-WRT firmware on my router, so your steps may be different.</li>
<li>Open the web UI for your router by navigating to its IP address in a web browser.</li>
<li>Go to the <i>NAT/QoS</i> tab, then the <i>DMZ</i> tab.</li>
<li>Set <i>Use DMZ</i> field to <i>Enable</i>.</li>
<li>Set the internal IP of your web server in the <i>DMZ Host IP Address</i> field.</li>
</ul></li>
</ul><br />
We now have our Raspberry Pi visible to the rest of the world on ports 80, 443, and 8080. Congratulations. Try sending a request to your web server by its public IP. You can find your public IP by using an internet site like <a href="http://www.ipchicken.com/">IpChicken</a>. If your external IP address is <i>123.45.67.890</i>, then enter <i>123.45.67.890:8080</i> into your web browser to get the default Nginx <i>index.html</i> page.<br />
<br />
<h3>Add Dynamic IP Solution: No-IP</h3><br />
My public IP address changes quite often, so it's hard to reliably SSH into my Raspberry Pi from work. To solve this, I wanted a domain name that points to my changing IP address. A technique called Dynamic DNS can help me. This involves registering a domain name with a third party and periodically updating my server's registered IP address with a simple app. The third party I chose to use is called <a href="http://www.noip.com/">No-IP</a>. The No-IP updater app is built into some routers, so you should check there first. My router has this capability, which is filed under a category called DDNS, but it isn't working. So, I want to try installing the app on my Raspberry Pi. Let's give it a try.<br />
<br />
<ul><li>Install the No-IP updater client<br />
<br />
<ul><li>Download the package into a new directory and unarchive it<br />
<br />
<code>mkdir ~/downloads</code><br />
<code>cd ~/downloads</code><br />
<code>wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz</code><br />
<code>tar vzxf noip-duc-linux.tar.gz</code><br />
</li>
<li>Compile the source code<br />
<br />
<code>cd noip*</code><br />
<code>sudo make</code><br />
<code>sudo make install</code></li>
</ul></li>
</ul><br />
During installation (<code>sudo make install</code>), you will be prompted for your noip.com username and password, as well as a refresh interval in minutes (I chose 30). Once installed, start the updater by running <code>sudo /usr/local/bin/noip2</code>.<br />
<br />
<h3>Conclusion</h3>That was quite an educational project. I have a new appreciation for managed web hosts, because I believe they are responsible for managing the server's security, network visibility, kernel upgrades, etc.<br />
<br />
What's next? I want to set up other stuff on this little server, such as <a href="http://gitlabhq.com/">GitLab</a>, <a href="http://www.redmine.org/">RedMine</a>, and <a href="http://tiddlywiki.com/">TiddlyWiki</a>.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com5tag:blogger.com,1999:blog-8410610130228839275.post-82507900065370413952012-11-10T16:05:00.003-06:002013-02-03T18:44:54.824-06:00Sunday Project: Force.com Spring App on Heroku<h1>Sunday Project: Force.com Spring app on Heroku</h1><br />
<br />
In this article, I will be using:<br />
<ul><li>Ubuntu 12.04</li>
<li>Java 1.6 - OpenJDK Runtime/IcedTea6 1.11.5 (installed before, not sure of source)</li>
<li>Eclipse Indigo (installed before, not sure of source)</li>
<li>Eclipse Heroku Plugin 1.0.1</li>
<li>Git 1.7.9 <em>or</em> Eclipse eGit Plugin 1.3</li>
</ul><br />
I will be following <a href="http://www.youtube.com/watch?v=qWsdlcyk_0M">this tutorial</a>, which was presented at Dreamforce 2012. This training session was wonderfully presented by Anand B Narasimhan <a href="https://twitter.com/anand_bn">@anand_bn</a> and Richard Vanhook <a href="https://twitter.com/richardvanhook">@richardvanhook</a>. I wanted to condense this 2+ hour session into just the steps required to make a Spring app on Heroku.<br />
<br />
Update: I added the resulting code into a <a href="https://github.com/chexxor/DF12-Forcecom-Spring-Heroku-Template-App">public repository on GitHub</a> for your reference.<br />
<br />
<em>(Note: This article was my first attempt at using markdown as a formatting engine. I grabbed the HTML and pasted it into Blogger, then fixed some spacing. Markdown limits you to six choices for headings, bullet point and number lists, and horizontal rules, so it's kinda restrictive. At this time, I'm not sure how to format this article better, so any tips would be nice.)</em><br />
<br />
What I learned, and what you may also gain from this article:<br />
<ul><li>The Heroku Eclipse plugin greatly simplifies creating/developing Heroku apps.</li>
<li>Project templates are gold, and save hours of frustration and configuration.<br />
I have been horrified by the time required to go from an empty project to a working application in Java.</li>
<li>An embedded web container seems like a great idea. If system/environment admins don't have to set up the web server, there is less risk of failure when deploying to different environments. Moving one thing is so much easier than moving two things and ensuring they cooperate.</li>
<li>The project template uses a Java library called <a href="https://github.com/ryanbrainard/richsobjects">RichSobjects</a> for talking to Salesforce. I haven't heard of this library before, but I'm making a mental note to check it out later if I need a Salesforce API library.</li>
</ul><br />
<hr /><br />
<h3>Prerequisites</h3><br />
<ul><li>Install the Eclipse Heroku Plugin<br />
<ul><li><a href="https://devcenter.heroku.com/articles/getting-started-with-heroku-eclipse">Official Heroku guide</a>. But I will detail the steps here, also.</li>
<li>Link to the plugin binaries by going to <strong>Help > Install New Software</strong> and clicking <strong>Add</strong>.</li>
<li>Name = <em>Heroku</em>, Location = <em>https://eclipse-plugin.herokuapp.com/install</em> and follow prompts.</li>
<li>To set up the plugin, go to <strong>Window > Preferences</strong> and find the <strong>Heroku</strong> section on the left.</li>
<li>You will need a Heroku account, which is free. I made an account by using <a href="https://api.heroku.com/signup/devcenter">this wizard</a>.</li>
<li>Get a Heroku API key by entering your Heroku credentials in the <strong>Email</strong> and <strong>Password</strong> fields and clicking <strong>Login</strong>.</li>
<li>The Heroku Plugin found my SSH key, because it is in a default location. If it's empty, follow the Heroku guide above to generate a public/private RSA key pair. Then return to the Heroku settings in Eclipse to generate an SSH key.</li>
</ul></li>
</ul><br />
<br />
<h3>Create new Heroku app</h3><br />
<ul><li>Note: You can import an existing Heroku app by going to <strong>File > Import</strong> and selecting <strong>Heroku</strong>.<br />
However, I will be creating a new app through Heroku.</li>
<li>Tell Eclipse to set up a new Heroku project for you by going to <strong>File > New > Project...</strong><br />
and select the <strong>Create Heroku App from Template</strong>.</li>
<li>Select <strong>Force.com connected Java app with Spring,OAuth</strong> and leave <strong>Application Name</strong> blank, as this name must be unique across all Heroku apps. If it's blank, Heroku will create a cool name for you.<br />
<ul><li>This sends a request to Heroku to set up an app for you and puts all the code in a git repository that Heroku manages.</li>
<li>Eclipse will clone this Git repo locally and expose it as an Eclipse project.</li>
</ul></li>
</ul><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZQkgeqilLcxc5y2K-Buk2Edw1lM_72fvN9s8lBgYFYjNSn36TugunBupXfWFBSsQr-O-ds2oBC7QH926z6fMe909dl1fHFrMG67yJ7u5DNJtEnIpwgWlmzpMze2R1dLrQOegr-RF3fwY/s1600/NewHerokuProjectTemplates.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="294" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZQkgeqilLcxc5y2K-Buk2Edw1lM_72fvN9s8lBgYFYjNSn36TugunBupXfWFBSsQr-O-ds2oBC7QH926z6fMe909dl1fHFrMG67yJ7u5DNJtEnIpwgWlmzpMze2R1dLrQOegr-RF3fwY/s320/NewHerokuProjectTemplates.png" width="320" /></a></div><br />
<br />
<h3>Inspect what we have</h3><br />
<ul><li>This is a Maven project, so look at <code>pom.xml</code> to see dependencies:<br />
<ul><li><strong>Spring</strong><br />
<ul><li>spring-context</li>
<li>spring-webmvc</li>
<li>jstl</li>
<li>standard</li>
<li>javax.servlet-api</li>
</ul></li>
<li><strong>Salesforce</strong><br />
<ul><li>richsobjects-core</li>
<li>richsobjects-api-jersey-client</li>
<li>richsobjects-cache-memcached</li>
<li>force-oauth</li>
<li>force-springsecurity</li>
</ul></li>
<li><strong>Tomcat</strong><br />
<ul><li>webapp-runner</li>
</ul></li>
<li><strong>Logging</strong><br />
<ul><li>jcl-over-slf4j</li>
<li>slf4j-simple</li>
</ul></li>
</ul></li>
</ul><br />
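As a rough illustration, each entry in the list above appears in <code>pom.xml</code> as a block like this (the group ID shown is the standard one for <code>spring-webmvc</code>; the version property is a placeholder, since the template manages actual versions):

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-webmvc</artifactId>
  <version>${spring.version}</version>
</dependency>
```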
<em>There are many code files and settings files in this project, so it's hard to see what's going on. This is different from Force.com applications, in which only code is exposed to the developer, and settings are normally in the environment and changed in the UI. So, instead of looking at each component of this Java application, we are going to look at code at just the highest-level.</em><br />
<br />
<ul><li>To see code that calls Force.com, see <code>ContactController.java</code>:<br />
<ul><li>Class annotations (<code>@Controller</code> and <code>@RequestMapping</code>) are part of the Spring framework. These<br />
instruct the framework where to inject framework code at runtime. This keeps code clean.</li>
<li>This class uses the RichSobjects library to interact with Sobjects in a Salesforce database<br />
by using the Partner API.</li>
</ul></li>
<li>To see the HTML page template we will request, see <code>contacts.jsp</code>:<br />
<ul><li>Pretty simple: it's HTML which has JSP tags, which the HTTP request handler will resolve into HTML for the user.</li>
</ul></li>
<li>To see global variables for the Spring app, see <code>applicationContext.xml</code>:<br />
<ul><li>Most values are hard-coded in this file. However, look at lines 29-33 to see yet-unresolved values. These values will be drawn from the Heroku environment, I believe.<br />
<br />
<pre class="prettyprint lang-xml"><code><fss:oauth logout-url="/logout" default-logout-success="/">
<fss:oauthInfo endpoint="http://login.salesforce.com"
oauth-key="#{systemEnvironment['SFDC_OAUTH_CLIENT_ID']}"
oauth-secret="#{systemEnvironment['SFDC_OAUTH_CLIENT_SECRET']}"/>
</fss:oauth>
</code></pre></li>
</ul></li>
<li>Where is Tomcat?<br />
<ul><li>Heroku uses the idea of an embedded web container. Instead of running a web daemon as one OS process and the Java app as a separate OS process, we unify the two pieces. We can instantiate the Tomcat web server from our Java program, using a Java wrapper called <code>webapp-runner</code>.</li>
<li><code>webapp-runner</code> makes app deployment and app start very simple.</li>
<li>Jetty is another web server that is popular to use as an embedded server.</li>
</ul></li>
<li>How to start our app?<br />
<ul><li><code>Procfile</code> has a command that Heroku calls to start an application.<br />
<ul><li><code>web: java $JAVA_OPTS -jar target/dependency/webapp-runner.jar --port $PORT target/*.war</code></li>
<li>(process name): (command to execute)</li>
</ul></li>
<li>Can have multiple processes - e.g. web, worker, and clock</li>
</ul></li>
<li>Navigate to app on Heroku<br />
<ul><li>Find the name of your app, which we allowed Heroku to decide. If it was <code>funny-name-1234</code>,<br />
navigate to this URL to see that the app is already running on Heroku:<br />
<ul><li><code>funny-name-1234.herokuapp.com</code></li>
</ul></li>
</ul></li>
</ul><br />
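To illustrate the multiple-process point above: a Procfile holds one <code>name: command</code> entry per line, so a hypothetical app with a background worker might declare (the <code>worker</code> class name here is made up):

```text
web: java $JAVA_OPTS -jar target/dependency/webapp-runner.jar --port $PORT target/*.war
worker: java -cp target/classes com.example.BackgroundWorker
```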
<h3>Set up local build for OAuth</h3><br />
<em>This application is currently set up to use OAuth to gain access to data in a Salesforce org. By default, Salesforce will not allow OAuth applications to request access, so we have to add an exception, that is, define an accessible application.</em><br />
<br />
<ul><li>To allow an external application to request access to a Salesforce org:<br />
<ul><li>Log in to the Salesforce org to which you want to connect.</li>
<li>To allow an external application access, go to <strong>Setup > Develop > Remote Access</strong>.<br />
<ul><li>Click <strong>New</strong>.</li>
<li>Choose any name in the <strong>Application</strong> field for this record. I chose <code>Heroku Local</code>.</li>
<li>Choose any email for <strong>Contact Email</strong> field.</li>
<li>Use <code>http://localhost:8080/_auth</code> for the <strong>Callback URL</strong> field.</li>
<li>Click <strong>Save</strong>.</li>
</ul></li>
<li>Navigate to the detail view of this Remote Access record to see that Salesforce<br />
generated a Consumer Key and a Consumer Secret.</li>
</ul></li>
<li>To set up our project to run locally in Eclipse, instead of only on Heroku:<br />
<ul><li>Set up a Run configuration by going to <strong>Run > Run Configurations...</strong></li>
<li>Select <strong>Java Application</strong> from the list on the left and click the plus icon at the top of this list.</li>
<li>Choose a name for this Run configuration. I chose 'web app runner'.</li>
<li>Enter your project name in the <strong>Project</strong> field, which looks like <code>funny-name-1234</code>.</li>
<li>Choose the main class for the <strong>Main Class</strong> field, which is <code>webapp.runner.launch.Main</code> for this project.</li>
<li>Go to the <strong>Arguments</strong> tab, and enter <code>src/main/webapp</code> in the <strong>Program Arguments</strong> field.</li>
<li>This application uses environment variables for OAuth. We will specify them at run time by going to the <strong>Environment</strong> tab. This app expects two keys, <code>SFDC_OAUTH_CLIENT_ID</code> and <code>SFDC_OAUTH_CLIENT_SECRET</code>.<br />
<ul><li>Click <strong>New</strong>.</li>
<li><strong>Name</strong> = <code>SFDC_OAUTH_CLIENT_ID</code></li>
<li><strong>Value</strong> = (value of <strong>Consumer Key</strong> field from Remote Access record we just created)</li>
<li>Save this and click <strong>New</strong>.</li>
<li><strong>Name</strong> = <code>SFDC_OAUTH_CLIENT_SECRET</code></li>
<li><strong>Value</strong> = (value of <strong>Consumer Secret</strong> field from Remote Access record we just created)</li>
<li>Save this.</li>
</ul></li>
<li>Click <strong>Run</strong>.</li>
</ul></li>
</ul><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg62nsZS4EyGO9qXARW89wqCoimTPcenz2C53jDvdomf5tcy3fpq75aDYcJU8BxgtalwydiQlNFQXP8ZrGys7OBZ8-2UMMtZILyn3lGbxyuEtCuWOWeH2XaQVfOeYrdMTb1QUqkthZ8v7E/s1600/HerokuEnvironmentVariables.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="390" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg62nsZS4EyGO9qXARW89wqCoimTPcenz2C53jDvdomf5tcy3fpq75aDYcJU8BxgtalwydiQlNFQXP8ZrGys7OBZ8-2UMMtZILyn3lGbxyuEtCuWOWeH2XaQVfOeYrdMTb1QUqkthZ8v7E/s640/HerokuEnvironmentVariables.png" width="640" /></a></div><br />
<br />
<em>After performing these steps and pressing Run, the application should be running locally on port 8080.<br />
We can see this by navigating to <code>http://localhost:8080</code> in a web browser. To check that OAuth has been set up correctly, navigate to the <code>http://localhost:8080/sfdc/Contacts</code> URL. The app will redirect to a Salesforce authentication page, where you should click Allow. You will then be redirected back to the same URL, now authenticated.</em><br />
<br />
<h3>Set up Heroku app for OAuth</h3><br />
<em>We added run-time variables to our environment by adding them to the Run Configuration in Eclipse.<br />
To add these variables to our app when it is running on Heroku, we must add them to another settings<br />
location in Eclipse that gets pushed to Heroku.</em><br />
<br />
<ul><li>Add another Remote Access record to the Salesforce org:<br />
<ul><li>Log in to the Salesforce org to which you want to connect.</li>
<li>To allow an external application access, go to <strong>Setup > Develop > Remote Access</strong>.<br />
<ul><li>Click <strong>New</strong>.</li>
<li>Choose any name in the <strong>Application</strong> field for this record. I used the name of the<br />
Heroku app, <code>Heroku Funny Name</code>.</li>
<li>Choose any email for <strong>Contact Email</strong> field.</li>
<li>Use <code>https://funny-name-1234.herokuapp.com/_auth</code> for the <strong>Callback URL</strong> field.</li>
<li>Click <strong>Save</strong>.</li>
</ul></li>
<li>Navigate to the detail view of this Remote Access record to see that Salesforce<br />
generated a Consumer Key and a Consumer Secret.</li>
</ul></li>
<li>Open the Heroku settings for this Heroku project:<br />
<ul><li>In Eclipse, click <strong>Window > Show View > Other...</strong></li>
<li>Choose <strong>My Heroku Applications</strong>.</li>
<li>Right-click on your application in this view, and select <strong>App Info</strong>.</li>
<li>To add new environment variables to this Heroku app, choose the <strong>Environment Variables</strong> tab<br />
in this view.<br />
<ul><li>Click the <strong>+</strong> button on the right.</li>
<li><strong>Key</strong> = <code>SFDC_OAUTH_CLIENT_ID</code></li>
<li><strong>Value</strong> = (value of <strong>Consumer Key</strong> field from Remote Access record we just created)</li>
<li>Save this and click <strong>+</strong> again.</li>
<li><strong>Key</strong> = <code>SFDC_OAUTH_CLIENT_SECRET</code></li>
<li><strong>Value</strong> = (value of <strong>Consumer Secret</strong> field from Remote Access record we just created)</li>
</ul></li>
<li>After adding these environment variables, the Heroku app should be immediately updated to<br />
reflect the values. If you navigate to the Contacts URL of your Heroku app, <br />
<code>funny-name-1234.herokuapp.com/sfdc/Contacts</code>, you can see that the OAuth is now working.</li>
</ul></li>
</ul><br />
<h3>Add New Feature to Your App</h3><br />
<em>To show how to add a new feature to the app, we will be adding a link to each contact's Twitter<br />
handle. Follow these steps to add this new feature.</em><br />
<br />
<ul><li>1) Add a new custom field to the Contact object:<br />
<ul><li>Log in to the same Salesforce org.</li>
<li>Go to <strong>Setup > Customize > Contacts > Fields</strong>. Click <strong>New</strong> and create a new text field<br />
called <code>TwitterHandle__c</code>.</li>
<li>Save this.</li>
</ul></li>
<li>2) Query this field in our local copy of the Java project:<br />
<ul><li>In Eclipse, open <strong>ContactsController.java</strong>.</li>
<li>In the <code>listContacts</code> method, add <code>TwitterHandle__c</code> to the Select clause of the query.</li>
<li>Save this file.</li>
</ul></li>
<li>3) Expose this field value in the JSP page:<br />
<ul><li>In Eclipse, open <strong>contacts.jsp</strong>.</li>
<li>Add a new header column to this table by adding <code><th>Twitter Handle</th></code> to line 13,<br />
underneath the Email header.</li>
<li>Expose the field value in the table cells by adding <code><td>${contact.getField("TwitterHandle__c").value}</td></code><br />
to line 27, underneath the similar row for Email.</li>
<li>Save this file.</li>
</ul></li>
<li>4) Test this change in the local build of the project:<br />
<ul><li>Stop the server by opening the <strong>Console</strong> view in Eclipse and clicking the red stop button.</li>
<li>Run a new build by clicking the Run button underneath the toolbar, which looks like a green<br />
arrow; the Contacts page will now have our Twitter handle column.</li>
<li>Navigate to the URL for this local page, which should be <code>http://localhost:8080/sfdc/Contacts</code>.</li>
</ul></li>
<li>5) To commit these changes locally:<br />
<ul><li>Right-click on your project in the Package Explorer in Eclipse, select <strong>Team > Commit</strong>.</li>
<li>Add a commit message which describes the changes, and click <strong>Commit</strong>.</li>
</ul></li>
<li>6) To push these local changes to our Heroku repository:<br />
<ul><li>Right-click on your project in the Package Explorer in Eclipse, select <strong>Team > Push to Upstream</strong>.</li>
</ul></li>
</ul><br />
<h3>Managing Your App</h3><br />
<ul><li>Check status of app<br />
<ul><li>Go to <strong>My Heroku Applications</strong> view in Eclipse.</li>
<li>Right-click on one of your apps, and click <strong>View Logs</strong> to see the last 1500 log lines<br />
for your app in production.</li>
<li>Heroku provides a log-routing system called Logplex. All messages that your app produces can<br />
be accessed there, and you can find third-party tools that derive information from these logs.</li>
</ul></li>
<li>Scale your app up and down<br />
<ul><li>Free dev accounts have only one dyno.</li>
<li>To scale an app to more than one dyno, you must attach billing to the account, for example<br />
by joining the app to a Heroku organization with billing enabled.</li>
<li>Right-click on one of your apps in <strong>My Heroku Applications</strong>, click <strong>Scale</strong> and choose <strong>3</strong>.<br />
Your app is now on a 3-node, load-balanced cluster.</li>
</ul></li>
<li>Add collaborators<br />
</li>
<ul><li>Go to <strong>My Heroku Applications</strong> in Eclipse and click <strong>App Info</strong>.</li>
<li>Go to the <strong>Collaborators</strong> tab to see all other users in your Heroku organization.</li>
<li>Either select one of these users or click the plus button on the right to add by email.</li>
</ul></ul><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxlpWWw5ARgdOEgUKga3L6_8r-klTLwxuJ4TagrnPvLC-tZMx0pJqaBYcxqKYQY74W9dwuI5uCKOQz9MAMpSwovSrtsY189sMHazcczeX7JiFZoGyugAjHAV1-m8cHukSmnQVgo3Ro2cQ/s1600/HerokuPluginAppInfo.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxlpWWw5ARgdOEgUKga3L6_8r-klTLwxuJ4TagrnPvLC-tZMx0pJqaBYcxqKYQY74W9dwuI5uCKOQz9MAMpSwovSrtsY189sMHazcczeX7JiFZoGyugAjHAV1-m8cHukSmnQVgo3Ro2cQ/s640/HerokuPluginAppInfo.png" width="640" /></a></div><div><br />
</div><div><br />
</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com3tag:blogger.com,1999:blog-8410610130228839275.post-87648164748381915992012-05-19T17:08:00.003-05:002012-05-19T17:08:33.219-05:00Starting to Understand InheritanceSoftware Inheritance<br />
<br />
As you spend more time in the software engineering profession, you will continually see software structured differently from how you would do it. Sometimes you are simply confused by the author's code, other times you understand it but disagree with it, and yet other times you become so inspired by it that you adopt its design. Recently, I've come across object-oriented code that makes heavy use of inheritance in its solution, and I have been genuinely confused by how difficult it is to read. I think I am finally coming to understand how to read code written in the inheritance style, and I'd like to share what I've found.<br />
<br />
Initial Issues - Internal State<br />
<br />
When first encountering inheritance, I interpreted it on a shallow level. I saw the 'extends' keyword, which means a class 'inherits' from the specified class and can reference its variables and methods, but I never understood how its use could justify the maintenance cost of its reduced readability.<br />
<br />
The first issue I had with reading inheritance-based code is shared class properties. When I create a new class, I design it to minimize references to internal state, and I use static methods as much as possible. Why? Those class properties are variables, unless they use the 'final' keyword, and without foresight into the possible values of these properties, your methods can produce unexpected results.<br />
<br />
So consider my surprise at seeing references to a base class' internal properties from an inheriting class! What could the author be thinking? Are you really sure that class-external variable will always hold a valid value for your class? Holy buckets, Batman, this inheritance business seems to be a bucket of holes!<br />
<br />
Towards Better Understanding<br />
<br />
I was reading some Javascript code recently for an Ajax-y framework. The base of the app uses John Resig's <a href="http://ejohn.org/blog/simple-javascript-inheritance/">simple Javascript inheritance</a>, and subsequent objects extend from this base Class type. It seemed that the author was so influenced by inheritance that he wanted to force it into Javascript before considering his solution. I understood inheritance on a shallow level, but still - what is so necessary about inheritance?<br />
<br />
After some research, I think I'm starting to understand the case for inheritance. The explanation offered on this Wikipedia page on <a href="http://en.wikipedia.org/wiki/Differential_inheritance">Differential Inheritance</a> was quite inspirational.<br />
<blockquote class="tr_bq">
"To think of differential inheritance, you think in terms of what is different. So for instance, when trying to describe to someone how Dumbo looks, you could tell them in terms of elephants: Think of an elephant. Now Dumbo is a lot shorter, has big ears, no tusks, a little pink bow and can fly. Using this method, you don't need to go on and on about what makes up an elephant, you only need to describe the differences; anything not explicitly different can be safely assumed to be the same." - <a href="http://en.wikipedia.org/wiki/Differential_inheritance">Wikipedia: Differential Inheritance</a></blockquote>
<div style="text-align: left;">
Ah, from this point of view, inheritance is a convenient way of saying "I need a class just like that one, but a little different." In other words, it is a programmer's feature for convenient customization. A bonus is that the inheriting class can be used in place of the base class through type casting. (Short thought - isn't this ability better served by interfaces and a platform that accepts registering custom handlers?)</div>
<div style="text-align: left;">
<br /></div>
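The Dumbo quote maps almost directly onto JavaScript's prototype mechanism - the same mechanism Resig's simple inheritance wraps. Here is a minimal sketch of that idea; the <code>elephant</code>/<code>dumbo</code> objects are illustrations of the quote, not code from any framework:

```javascript
// A generic elephant: the shared "base" description.
const elephant = {
  ears: "normal",
  tusks: true,
  canFly: false,
  describe() {
    return "ears: " + this.ears + ", tusks: " + this.tusks + ", flies: " + this.canFly;
  },
};

// Differential inheritance: dumbo records ONLY what differs.
// Anything not overridden falls through to elephant via the prototype chain.
const dumbo = Object.create(elephant);
dumbo.ears = "big";
dumbo.tusks = false;
dumbo.canFly = true;

// dumbo reuses describe() from elephant, but with its own values.
console.log(elephant.describe()); // ears: normal, tusks: true, flies: false
console.log(dumbo.describe());    // ears: big, tusks: false, flies: true
```

The point of the sketch is that <code>dumbo</code> never restates what makes up an elephant; it only states the differences, and the shared <code>describe</code> method is looked up on the base object.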
<div style="text-align: left;">
Issue Still Remains</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
While I do have a better understanding of inheritance now, my problem still exists: If MyClass inherits from BaseClass, I can't look at MyClass on its own because most of the real logic exists in BaseClass! This doesn't help when debugging these two classes, since BaseClass was designed to run one way, and MyClass might be changing the behavior of BaseClass in an incompatible manner. Sure, this may work, but it feels a bit fragile and unnecessary. I wonder if my understanding of the benefits of inheritance-utilizing code will grow as I continue to read it, but I hope it isn't a poison that infects my style.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com2tag:blogger.com,1999:blog-8410610130228839275.post-68851748619262025252012-04-15T14:17:00.000-05:002012-06-20T07:52:12.281-05:00On Recording Growth - Studying vs BuildingMy career choice is software engineering, a knowledge-based career. While it is easy to show a product for the time I spend writing software, it isn't so easy for the time I spend studying it. With studied knowledge, I may be able to hold more intelligent discussions about software engineering, but if I disappear tomorrow, will my time investment in reading about it have any lasting impact on the world around me? No - all that knowledge is locked inside my head. This is where blogging can be so important, and why I value it.<br />
<br />
From Studying to Building<br />
<br />
I haven't been blogging much lately, which doesn't please me. Why haven't I been blogging? When I was regularly blogging, the process consisted of studying software topics, then transferring my realizations into blog posts. A month or two ago, I started writing a software tool to help develop and deploy Force.com code, and I stopped studying software topics. I had never completed a useful piece of software before, not counting the one-off solutions I write for my job, and I wanted to test my abilities to see if I could actually write something useful on my own. As I continued with the project, I found my abilities to be lacking, which surprised and angered me.<br />
<br />
And so my obsession was triggered. If I can't write a simple little software tool like this, can I really call myself a skilled software engineer? I think I can't. A simple tool like this shouldn't be so difficult, and yet I can't finish it or reach a useful state! When I'm not banging my head against my laptop screen in frustration while hacking on this code, I'm thinking about how the failed project trumpets my ineptitude. Since starting, it has consumed most of my free evenings, and my obsession with this code leaves me unable to study the higher aspects of software engineering, leaving me with no material for my blog. If I can just finish this program, I will have evidence that I can write useful software, and I will be able to forget the huge time investment I made.<br />
<br />
Expectations of Studying vs Building<br />
<br />
Stepping back to see this, I find myself at a crossroads, deciding which road to choose in the future. Should I be a higher-level software engineer, concerned with small steps of enlightenment that are easily serialized into blog posts, or should I be a software hacker, who gains skills writing real software but has difficulty serializing gained knowledge in blog posts?<br />
<br />
Or am I drawing another false dichotomy? Probably so. How is studying different from building, such that I feel time can be wasted in one but not the other? The time spent building a program that can't be finished and can't be used - is it time wasted? It sure feels like wasted time when you don't meet your expectations and feel like a failure. Contrast this with pure study: when you study something, you are searching for answers to questions, rarely committed enough to feel failure. So perhaps this is the only difference between the two - expectations.<br />
<br />
This will be the conclusion from today's introspection. Either studying or building is a valid way to grow, learn, and improve, but building may have the ability to cause deeper emotions.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-45504116478296882772012-03-01T00:44:00.000-06:002012-05-19T15:21:52.827-05:00On Moving to Legacy Java Web App MaintenanceI work in a company with other developers. Whether we are on the same project or not, we are on the same team and we need to support each other. Since I started working at Sundog ~14 months ago, I've been working solely with Salesforce, doing custom Force.com development. I've gotten pretty accustomed to the limits and boundaries of the platform, and I feel that I'm pretty good with it.<br />
<br />
A coworker has been maintaining a rather large legacy Java web application by himself for the last >2 years. It's been decided that it's time to shift another developer into his position, and that person shall be me. He was getting pretty sick of being the lone developer on old code. As he has been setting up my environment and training me over the last week, I can almost see the weights as they are lowered from his shoulders. He's a great software engineer who has been trapped by misfortune for a long time, so I'm happy to step in so he can move on to something else.<br />
<br />
As a company, we have been pretty poor about shifting teams. Some people grew so tired of being the only person who can do something that they left the company because of it. These employees are called knowledge silos, and this is inherently a bad thing. If only one employee knows how to manage an important system, or only one employee can use a certain software tool, the company will be screwed when that employee leaves. It would take months to find another person with a similar skill-set, and even longer for that person to catch up to where his predecessor left off.<br />
<br />
So yes, I'll be using Java, and re-learning how to program in it since I haven't used it in a long time. I've learned how to set up a local web application server, OC4J, and how painful these servers are to boot, configure, and manage. I've learned how to set up and use a 'real, enterprise' IDE, IntelliJ, to edit, deploy, and debug Java projects. I still need to learn a whole new set of keyboard shortcuts and IDE quirks. I will continue to learn about web application frameworks, specifically Struts and Spring, and their models of handling HTTP requests. I will study up on OOP design patterns, and I will have to review the advanced language features that are available in Java (I'm familiar with Salesforce's Apex, which is Java Lite).<br />
<br />
I have mixed feelings about this change in positions for me. I am happy to learn about different MVC frameworks and how a real debugger works (I haven't hardcore used one before). However, I am truly worried that I will be doing Java maintenance for the foreseeable future (RE: >1 year). I really don't want to be off by myself again, separated from the forward direction that the company is taking into cloud and mobile platforms. I am really worried that doing pure maintenance like this for a full year will destroy my passion and motivation for software engineering, design, and language theory. Here's to hoping I can keep my chin up. <span style="background-color: #fbfbfb; font-family: Verdana, Arial, Helvetica, sans-serif; font-size: 10px; line-height: 15px; text-align: -webkit-center;">( ̄ω ̄')</span>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com1tag:blogger.com,1999:blog-8410610130228839275.post-42096371154253432432012-02-22T19:24:00.004-06:002012-02-22T19:24:47.566-06:00Salesforce Painful Certification Practices - No Feedback<div>
This post is an open complaint about Salesforce's certification process. This is quite a long post, and you may think it is a rant. Well, it very well may be a rant, but these are opinions that must be expressed, and I will gladly take the podium at this time because I haven't heard anyone else complain. I need to discuss a few of Salesforce's terrible certification practices: inconsistent terminology and poor feedback.</div>
<div>
<br /></div>
<div>
The first area of fault I want to point out is Salesforce's inconsistent terminology. I'm sure you know what I mean if you've studied for or taken any certification exams. I've taken a few of Salesforce's multiple choice certification exams: the Developer Certification and the Advanced Developer Certification exams. Both were filled with questions that were trickily worded and had answer choices that were worded just as trickily. Rather than testing the candidate's knowledge and understanding of concepts, these exams seemed to test the candidate's ability to remember key Salesforce terminology. Salesforce often uses multiple words to describe the same thing, uses the same words to describe other things as well, and, on top of this, is inconsistent with the usage of this terminology. To test candidates on this inconsistent terminology is a terrible practice.</div>
<div>
<br /></div>
<div>
To move on to the main complaint of this post, I want to discuss Salesforce's feedback on their certification exams. In short, there is none. If you fail an exam, you are not told how close you were to passing, nor are you told in what areas you did poorly. This is key information! It encourages more studying, and it provides a foothold for the candidate's next attempt at passing the test. Without feedback, taking these tests is like running full speed into a wall. Either the wall busts and you pass through, or you fall flat on your backside. You may start to feel insane after feeling the pain of failure three times in a row.</div>
<div>
<br /></div>
<div>
To continue on this point, I want to extend this complaint from written exams to hand-graded exams. If you fail a multiple choice exam, you will certainly remember some of the questions from it. Some may intrigue or confuse you and inspire you to research them later. But consider the case of submitting a programming assignment for human review, or giving a presentation before a panel of judges - situations one will encounter when attempting the Advanced Developer or Salesforce Architect certifications. I've failed the Advanced Developer Certification programming assignment once before now, and they did provide feedback. However, this feedback seemed to be computer-generated or mistaken, because when I read it, I had no idea what part of my code they were marking as a mistake. Moreover, their feedback does not help me improve!</div>
<div>
<br /></div>
<div>
To give specific details about the poor feedback Salesforce provides, I'll quote a few excerpts from my last programming assignment results.</div>
<div>
<br /></div>
<div>
"The design approach taken is suboptimal and does not demonstrate an understanding of triggers, order of execution, and platform design principles on the platform." - I'm sorry, but I believe I am a very pragmatic programmer and that I am *very* familiar with the Salesforce platform. When I read this, I think back on much of the example code I've read in Salesforce documentation, and how inefficient and poor it is. So these same Salesforce experts think my code is very poor? I care a great deal about writing maintainable and efficient code! If they think my code is terrible, point out my mistakes! I would love to fix them and improve the code I write on your platform! Seriously, what code are they looking at?</div>
<div>
<br /></div>
<div>
"Areas for Improvement: Use of aggregate queries" - I am well aware of SOQL's aggregate query functionality, but I found not a single place in the application where it made sense to use one. If they had told me that I was required to use an aggregate query just to demonstrate my knowledge of them, instead of using a single query to get the records I need and looping over them, I could have easily added a second (precious) query just to use an aggregate. I didn't even consider this, however, since Salesforce's governor limits force developers to minimize the number of SOQL queries.</div>
<div>
<br /></div>
<div>
"Areas for Improvement: Conforming to governor limits" - Uh... what? What actionable item or lesson can I take away from this? Every single one of my methods was "bulkified" and accepts only collections as parameters! What code are YOU looking at!</div>
<div>
<br /></div>
<div>
"Strengths: Developing scalable code to handle bulk operations" - What! You just told me that my code is inefficient and doesn't conform to governor limits! Now you tell me that my code is wonderfully scalable?</div>
<div>
<br /></div>
<div>
In this final paragraph, I want to give a voice to the feedback process of the Salesforce Architect certification. My co-worker, not I, took this exam, so I'm reporting second-hand information. He paid thousands of dollars and took time off of work to fly to San Francisco to give a presentation before a panel of judges. They gave him a book of backstory and requirements for integrating an external system with Salesforce and allowed ~60 minutes to architect a solution and prepare it for presentation. Sixty minutes is hardly a practical amount of time to solidly architect something of that scale! To add insult to injury, when he received the results via email the following week, which informed him of his failure, they provided no explanation, justification, or areas of improvement. How can he prepare to take this exam again? How can he justify spending thousands of dollars again without being able to promise improved odds of success? It is quite hard to justify attempting this exam again, no matter how important it is.</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
For consulting companies like my own, certifications are a necessary way to prove to new and existing clients that we legitimately understand the technology. Therefore, to improve our position relative to competitors, we strongly encourage our employees to obtain as many Salesforce certifications as possible. I don't mind getting certified, because it makes me a more employable person, and I also personally want to be awesome at my profession. So, even though the certification process is incredibly painful, people like me will be forced to continue banging our heads into walls.</div>
Continuation from <a href="http://alexdberg.blogspot.com/2012/02/appeal-of-single-page-web-apps.html">previous post</a>. It was written on a caffeine high, so I thought I was saving the world, and this post would be the roadmap to salvation.<br />
<br />
Alright, problem: the user clicked the "next" button/link, now our Javascript app needs to load new content.<br />
Link the button to a Javascript function called UpdateContentRequest. What does this function do? We want this function to do everything necessary to update the page to show this new content. This requires a few steps.<br />
First, it has to request the new content from the server. This isn't too difficult if we use jQuery's ajax function. A more flexible solution is to make the content available via a REST interface and use a Javascript REST framework. What do we do after we get it? One option is to store it locally, either in a variable or in HTML5 localStorage. The other option is to immediately render it to the page without storing it long-term.<br />
<br />
This brings us to the next step. Once we have the data, how do we render it to the page easily and appropriately? This also shouldn't be too difficult if we use the right tools. It's easy to know where to place the new data if we use an id attribute to mark the parent element of the content - call it "blogContentWrapper" or something. The other tool to use is a Javascript templating language. There are many of these out there, such as jQuery Templates, Handlebars, or Mustache, so just pick your favorite. These tools allow you to write an HTML template with a few holes, then inject data into the template to dynamically produce marked-up content. Just take this marked-up content and replace the current children of "blogContentWrapper" with it.<br />
<br />
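The two steps above can be sketched in a few lines. This is an assumption-heavy illustration, not a finished library: <code>renderTemplate</code> is a toy stand-in for a real templating language like Mustache, <code>fetchContent</code> is passed in so the sketch isn't tied to jQuery or any particular transport, and <code>container</code> stands in for the "blogContentWrapper" element.

```javascript
// Toy templating: replaces {{key}} placeholders with values from data.
// A real app would use a library such as Handlebars or Mustache instead.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? String(data[key]) : "";
  });
}

// The template for one piece of blog content, with a few "holes".
const postTemplate = "<h2>{{title}}</h2><div>{{body}}</div>";

// Step 1: request just the new content. Step 2: render and place it.
// fetchContent is any function like (url) => fetch(url).then(r => r.json());
// container stands in for document.getElementById("blogContentWrapper").
async function updateContentRequest(url, fetchContent, container) {
  const data = await fetchContent(url); // only the content, no markup/CSS/JS
  container.innerHTML = renderTemplate(postTemplate, data);
  return container.innerHTML;
}
```

For example, with a fake server response <code>async () => ({ title: "Next Post", body: "Hello" })</code> and an empty container, <code>updateContentRequest</code> resolves to <code>&lt;h2&gt;Next Post&lt;/h2&gt;&lt;div&gt;Hello&lt;/div&gt;</code> - the only thing that crossed the "network" was the content itself.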
Wow, this sounds so easy. The hard part is to make this library flexible so it can be used for a number of types of websites while keeping it easy to use and powerful. I'll need to consider the use cases for this library and then reconsider the level of abstraction to use. Also, Google supports crawling Ajax websites like this, so I'll need to consider their requirements to keep this library compatible.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-12209976193923476182012-02-10T08:13:00.000-06:002012-05-19T15:22:24.690-05:00The Appeal of Single Page Web Apps<div>
Single page web apps. I am very happy when I visit a site that subscribes to this philosophy, because such sites are very responsive. A philosophy? Yes, I think it is a philosophy. Can we call it a philosophy if its followers advocate it as a way to avoid wastefulness? While single page web sites are a wonderful solution for a few types of web sites, they aren't the best choice for other types. Ignore that while we discuss the wastefulness that single page web sites solve.<br />
<br />
Website wastefulness? What waste, exactly? What I am thinking of is that the site's entire HTML markup, Javascript, and CSS styling must be re-sent to the browser for each request. Sure, client-side browser caching may help reduce the wasteful resource requests here, but we shouldn't depend on the client to optimize this when the website developer has the power to optimize the user's experience.<br />
<br />
How can the website developer optimize this? Make the web site a Javascript app! Send this Javascript app on the initial page load, then delegate each subsequent page request to the Javascript app. How will the Javascript app do this? It will request just the new content from the server and place it on the page. Using this method, we are only grabbing the new content that the user wants, not all the resources and markup for the page. Efficient! And fast!<br />
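Here's a rough sketch of that delegation step in plain Javascript. The URL scheme and function names are hypothetical; in a real app, fetchJson would be an Ajax call (e.g. jQuery's $.getJSON) and renderContent would place the returned data into the page:

```javascript
// Sketch of delegating a page request to the Javascript app: build a
// content-only request URL, fetch just the new content, and hand it to a
// render step. The /api/content scheme is made up for illustration.
function contentUrl(pagePath) {
  return '/api/content?page=' + encodeURIComponent(pagePath);
}

function navigate(pagePath, fetchJson, renderContent) {
  // fetchJson(url) should return a promise-like object resolving to the
  // content payload; renderContent(data) places it on the page.
  return fetchJson(contentUrl(pagePath)).then(renderContent);
}
```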
<br />
I haven't figured out how to architect this Javascript framework, but I'm sure it would be worth the time investment. Maybe there is an existing framework that does most of this, or even some of it.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-81388585154889406182012-01-28T17:14:00.003-06:002012-01-28T17:56:51.609-06:00Two Months of Chinese Language - Stories of Its Usefulness<br />
In my <a href="http://alexdberg.blogspot.com/2012/01/priority-change-chinese-studies.html">last post</a>, I explained my recent shift in priorities, which resulted in one of my hobbies, Chinese language learning, moving to the number one position. I spent ~2 months studying Chinese in preparation for a solo vacation to Taiwan, to test how much of a language one can learn in two months.<br />
<br />
How did I score on my self-test? Not as well as I hoped, but still a success. Since I was only able to understand a few words of each sentence, I wasn't able to grasp much meaning. I *was* successful in communicating on a few occasions. Once, I was city-walking, trying to find the hiking path to the top of a large hill. I stopped a guy walking on the street with a "dui bu qi" (excuse me) and explained that "wo xiang qu zhe li" (I want to go here) while pointing to my map. Success! He spoke some fast Chinese that I didn't understand, but he also used hand motions! Straight ahead and left! Alright! I found the mountain, but couldn't find the hiking trail to go up.<br />
<br />
I saw another bored-looking guy, so I asked him "wo xiang qu shang" (I want to go up). I hoped I didn't sound like a caveman with such simple sentences. I guess my tones were right, because he didn't look offended as if I had insulted his mother. He also used hand motions! Success!<br />
<br />
I spent some time in Japan, and made a good friend who is from central Taiwan. I took the opportunity to meet up with her again, and I spent a few days with her family. Her family was very welcoming toward me. They had a car! This was so nice to see after city-walking in Taipei for 5 days. They took me to a few of their favorite restaurants. Real Chinese food is not street food? Trip-changing experience! Home-made food is what? Noodles, rice, and veggies! So interesting! They drove me to a mountain where some of the best Oolong tea is made. How educational! I didn't know Oolong tea could be so delicious! And I didn't know what tea fields look like. Rows of bushes on hillsides in the clouds! Beautiful, educational!<br />
<br />
I felt so bad about not being able to make fulfilling meal-time conversation. I wish I could have told them what my life was like, and what I found interesting about their lives. I wish I could have thanked them in better Chinese. I hope my gestures and thoughts of thanks were picked up by their sense of empathy. If I say "xie xie" (thank you) five times in a row, does that properly mean "Thanks! I owe you so much, and your home and family is so awesome! I had so much fun!"? I sure hope they got the message.<br />
<br />
The one part of the language that I totally failed at? Ordering food. At most of the restaurants I visited in Taiwan, there are no pictures of food you can use to decide what to order. There's a sheet of paper with a grid on it. One column of the grid is filled with Chinese words for foods. The other column is for you to place checkmarks. This is broken up into categories. So, if McDonald's used this concept (they totally should), to order a burger, you need to take a sheet of paper, and put a checkmark next to 'hamburger', 'cheese', 'pickles', 'bacon', and 'lettuce', and don't forget to put a checkmark next to 'fries'. This is a very efficient way to order, I think, but if you can read *none* of the words, you just put checkmarks next to random words that you like - maybe a word has a simple character or one that you recognize. More than once, I was surprised by what I got, and I still have no idea how to order it again. I need to learn more food words next time I travel to Taiwan or China.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-84607136592694087512012-01-28T17:14:00.002-06:002012-01-28T17:15:01.702-06:00Priority Change - Chinese Studies PromotedMost of 2011 has seen me diligently studying the art of software development. It's a very deep topic that could keep me occupied for the rest of my life. I'm lucky to be able to work and stay interested in such a deep discipline. I've developed a few other interests in the second half of 2011, one of which is the Chinese language. A recent development has caused me to push my Chinese studies up to number one, ahead of software studies. This means that I won't be blogging about software for awhile. :( This kind of saddens me because I enjoy software, and investing time in it will help me out in my career as well. But, as with investments, it is smart to diversify. 
If the software industry dries up (can't imagine why) or my life changes drastically and I lose interest in it, my trump card will be useless. So onwards to investing time in hobbies.<br />
<br />
Why Chinese? Why *not* Chinese? I've learned that you shouldn't have to justify your interests; an indescribable force captures one's interest, and it should be trusted.<br />
<br />
Well, maybe I can find a small influence for the development of this interest. I had been building up vacation days, so I had to start thinking about what to do with them. While it would be nice in the short term, spending a week in a tropical paradise didn't suit me. I needed something that I could explore and learn about. After much thought, I decided on two ways to use my vacation time most effectively: a) Travel to a fun city that also has a software conference to attend, or b) Learn a new language and travel to a place that speaks it as a test for myself.<br />
<br />
Which did I choose? Well, if I can decide on a good conference to go to, my company would pay for me to attend it - no need to spend my vacation time. So I decided to test myself. I bought a round trip ticket to Taiwan, scheduled for 2 months in the future, and attempted to learn as much Chinese as I could in two months.<br />
<br />
How much Chinese did I learn before departing? I was pretty motivated during those two months. I may write another blog post detailing my strategy, which proved to be pretty effective, but I can summarize it here. I listened to many hours of basic Chinese phrases in various situations. I had to listen to each lesson ~5 times before I was able to pick up any words. Separately, I started doing flashcards. It is pretty easy to find flashcards for all the basic Chinese phrases, such as "Thank you", "Goodbye", "This is delicious", and "Where's the bathroom?". I tried to learn ~20 new words each day (probably more in reality). I think I had completed a deck of 500 words before leaving, and crammed another 200 on the flight to Taiwan (it was a long flight).<br />
<br />
Read part two of this post <a href="http://alexdberg.blogspot.com/2012/01/two-months-of-chinese-language-stories.html">here</a>, where I tell stories about the few times that my Chinese studies paid off.<br />
<br />Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-59365623773219655232012-01-14T15:40:00.002-06:002012-01-14T15:40:47.217-06:002011 Year-in-review and Personal Progress2011 has ended and 2012 begins. I sure hope I'm not the same person that I was 1 year ago. So how have I changed? Where have I improved? What new skill have I learned? What is my living situation and happiness level?<br />
<div>
<br /></div>
<div>
I've learned a lot in the last year. I'd like to document a few of the things here, so I can fondly look on it at a later time.<br />
<br />
Separating presentation from data on a web page.<br />
I spent a few weeks this year lightly researching client-side web applications, even creating a prototype web application that is highly responsive using Backbone.js. This is one part of software that I thought I would never understand because of Javascript, Ajax, and talking across the network, so I'm pretty proud of this one.<br />
<br />
MVC and other presentation patterns.<br />
While I'm still not an expert in this area, I think my knowledge is now greater than that of a large percentage of my peers. I don't have a single piece of software to demonstrate my new-found knowledge (which is true for much of what I learn), but it will make itself evident in the software I write moving forward. Varieties of MVC are present in most areas of software engineering now: iOS, Android, web pages, and desktop applications all use some form of MVC to keep their applications clean.<br />
<br />
Salesforce development.<br />
While knowledge of writing applications for Salesforce is not directly transferable to other disciplines, it still allowed me to learn and practice methods of controlling complexity, updating legacy (to me) code, and querying information from and persisting information to a database. Also, while in Salesforce land, I was able to practice object-oriented programming, which was a roller-coaster ride of "Yes, I totally get it!" and "I understand nothing!" feelings.<br />
<br />
Version control systems.<br />
I did quite a bit of research and reading on various version control systems. I researched not only the version control software tools themselves, but also the software project management practices and release processes that rely on these tools. It's a complex topic that spans technical areas, human management, and gray-area decisions. While I learned the basics comprehensively, I feel that there is still more practical stuff to learn.<br />
<br />
Professional recognition.<br />
I was invited to join the experts program, which places me as a figurehead on the prow of my company's ship. I write well-crafted blog entries a few times a month, which give me an outlet for the technical thoughts in my head. I also set up a tech talk at work, for which I created a slideshow and practiced a talk that introduces the technical basics of Git. I presented the tool to the entire software team in a factual manner, and intend to give a follow-up presentation to discuss the release-management styles that are used with distributed version control systems. I am making it my goal to give more tech talks this year - a solid topic once per month.<br />
<br />
Non-work related hobbies.<br />
Besides work, I've decided to learn Mandarin Chinese. It's a challenging language, but I am enjoying studying it. It's far easier than Japanese grammatically, but I think that proper pronunciation will elude me for quite some time. It hasn't been my number one priority (that's software), so it only received ~10 hours per week, which I worry is not enough. I took a vacation to Taiwan in September to test my knowledge. I found that I could speak very little after just 2 months of studying, but it was enough for me to ask "Which way?" questions and say "Please help me." phrases. I have learned a number of words now, enough that I may be able to start chatting with Mandarin speakers on the internet to learn more. Conversation flow and phrasing are still pretty difficult.<br />
<br />
I hope 2012 brings me as far as 2011 brought me. As long as I keep improving and I'm recognized for it, I'll be happy working with my current company.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-32293470573147355562012-01-14T13:14:00.000-06:002012-05-19T15:23:14.958-05:00Alex Reads - MS Research - Cohesive and Isolated Development with BranchesPost-read thoughts -<br />
This paper seems like it was written by amateurs. Note that I am not a member of the academic community, nor do I write academic papers, so this is more of a comment on their writing style and their ability to defeat my BS filter (i.e. Can you prove that? How exactly do you define 'x'?).<br />
Having said that, there are some useful ideas and interesting results from their interviews and research with real projects. Here's what I found interesting:<br />
<ul>
<li>Studies show that branch usage greatly increases with new adoptees of DVC.</li>
<ul>
<li>Pre-DVC, 1.54 branches/month. With-DVC, 3.67 branches/month (though I worry about methods used to obtain this info)</li>
<li>The idea that prior to DVC, branches were created only for releases, not new features.</li>
<li>To effectively use DVC branches, create one for each new feature, localized bug fix, or maintenance effort.</li>
</ul>
<li>Studies show that even with DVC, a central repo is still used. (It is important to admit this, IMO)</li>
<ul>
<li>An accessible DVC repo enables anyone to contribute to the project. Developers without commit privileges were reduced to working w/o VC. Accepting changes from unofficial project members has high barriers.</li>
<li>Academics advise us to checkpoint code at frequent intervals in a place separate from the 'team repo'. Only tested and stable code should be integrated into the 'team repo'. DVC systems enable and encourage this practice.</li>
</ul>
<li>The term "Semantic conflict" - All VC systems are good at syntactic conflicts, but not semantic conflicts.
</li>
<li>Awareness of
'Distract commits', which are commits that are required to resolve merge conflicts.</li>
</ul>
<br />
<br />
<br />
Link to Microsoft Research paper -<br />
Introduction web page - <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=157290">http://research.microsoft.com/apps/pubs/default.aspx?id=157290</a>
<br />
Research paper [PDF] - <a href="http://research.microsoft.com/pubs/157290/paper.pdf">http://research.microsoft.com/pubs/157290/paper.pdf</a>
<br />
<br />
<br />
<blockquote class="tr_bq">
<b>Abstract.</b> The adoption of distributed version control (DVC), such as Git and Mercurial, in open-source software (OSS) projects has been explosive. Why is this and how are projects using DVC? This new generation of version control supports two important new features: distributed repositories, and history-preserving branching and merging where branching is easier, faster, and more accurately recorded. We observe that the vast majority of projects using DVC continue to use a centralized model of code sharing, while using branching much more extensively than when using CVC. In this study, we examine how branches are used by over sixty projects adopting DVC in an effort to understand and evaluate how branches are used and what benefits they provide. Through interviews with lead developers in OSS projects and a quantitative analysis of mined data from development histories, we find that projects that have made the transition are using observable branches more heavily to enable natural collaborative processes: history-preserving branching allows developers to collaborate on tasks in highly cohesive branches, while enjoying reduced interference from developers working on other tasks, even if those tasks are strongly coupled to theirs.</blockquote>
<br />
<br />
<br />
Introduction<br />
<ol>
<li>Purpose of Version Control</li>
<ol>
<li>Create isolated workspace from a particular state of the source code.</li>
<li>Can work within one branch without impacting other developers</li>
</ol>
<li>Purpose of branches</li>
<ol>
<li>Should be 'cohesive' so that a team can work together on a branch</li>
<li>Keeps new features separate, and allows merging features when complete</li>
</ol>
<li>Evolution of VC systems</li>
<ol>
<li>Marked by 'increasing fidelity of the histories they record'</li>
<li>1st gen - record individual file changes - can roll back individual files (RCS)</li>
<li>2nd gen - record sets of file changes (transactions) that can be rolled back (CVS)</li>
<li>3rd gen - records history of files even through branching and merging (DVC)</li>
</ol>
<li>DVS features</li>
<ol>
<li>Every copy of a project is a complete repository, complete with history</li>
<li>Can change source code changes with other peer repositories</li>
<li>Preserves history through branches and merges</li>
<ol>
<li>Each child commit tracks its parent commits - across branches and merges</li>
<li>Allows us to quantitatively study of branch cohesion and isolation</li>
<li>Allows us to study relationship in branch usage with defect rates and schedules delays</li>
</ol>
</ol>
<li>Why has DVC become so popular?</li>
<ol>
<li>Developers wanted to use branches, but experienced "merge pain" with CVS</li>
<ol>
<li>Studies show that branch usage greatly increases with new adoptees of DVC</li>
<li>Studies show that even with DVC, a central repo is still used</li>
<li>Can observe that branched history can be linearized into a single 'mainline' branch</li>
</ol>
</ol>
<li>RQ2 is "How cohesive are branches?"</li>
<ol>
<li>'Cohesivity' is measured by directory distance of files modified in a branch (wha?)</li>
<li>Compare branch cohesion in Linux history against trunk branch cohesion</li>
<li>If branches are not more cohesive, then either a) trunk is more cohesive or b) directory distance is not a good measurement for 'cohesivity' (lol)</li>
<li>Results - branches are far more cohesive than background commit sequences (background?)</li>
</ol>
<li>RQ3 is "How successfully do DVC branches isolate developers?"</li>
<ol>
<li>VC is good about flagging syntactic changes between branch-time and merge-time</li>
<li>VC is not good about flagging semantic changes between branch-time and merge-time</li>
<ol>
<li>Semantic = assumptions made during development (so, API/method changes?)</li>
<li>Branch coupling causes semantic conflict</li>
</ol>
<li>Semantic conflict is number of files in branch that was also modified in trunk since fork</li>
<li>Measure how often a semantic conflict would interrupt a developer if using no branching</li>
</ol>
<li>Paper proves three things</li>
<ol>
<li>Prove that branching, not distribution, has driven popularity in DVC</li>
<li>Define two new measures, branch cohesion and distracted commits</li>
<ol>
<li>'Distract commit' are new commits required to resolve merge conflicts</li>
</ol>
<li>Show that branches are used to undertake cohesive development tasks</li>
<li>Show that branches effectively protect developers from concurrent development interruptions</li>
</ol>
</ol>
<div>
Theory</div>
<div>
<ol>
<li>History</li>
<ol>
<li>Git and Mercurial basic history - birth, growth, majority use in Debian</li>
<li>Adopting new VC is very difficult - citing experiences by Gnome, KDE, and Python</li>
</ol>
<li>RQ1 "Why did projects rapidly adopt DVC?"</li>
<ol>
<li>Interviews show that main reason is to use branches for better cohesion and isolation</li>
<li>Exactly how cohesive are branches? How well do they isolate feature teams?</li>
<li>If developers use branches to isolate tasks, branches will be cohesive. On the other hand, developers could use branches merely to isolate personal development work, without separating work into tasks</li>
</ol>
<li>RQ2 "How cohesive are branches?"</li>
<ol>
<li>Coupling and Interruption</li>
<ol>
<li>Should checkpoint code at frequent intervals separate from 'team repo' - only tested and stable code should be integrated into 'team repo'</li>
<li>When ready, integration must not be difficult or gains of personal branch is lost</li>
<li>When not using branches, changes are not proven stable, require integration work</li>
<li>Studies show that resuming from interruption takes at least 15 minutes</li>
</ol>
</ol>
<li>RQ3 "To what extent do branches protect developers from integration interruptions caused by concurrent work in other branches?"</li>
</ol>
<div>
<br /></div>
Methodology</div>
<div>
<ol>
<li>Began with interviews to develop hypotheses regarding motivations for adoption</li>
<li>Empirically evaluating by performing statistical analysis</li>
<li>Semi-structured interviews (sounds like high probability for introduction of non-scientific bias)</li>
</ol>
<div>
<br /></div>
Evaluation</div>
<div>
<ol>
<li>Description of linearizing a branched DVC history</li>
<ol>
<li>Project concurrent sequence of changes onto single timeline</li>
<li>Commits on this timeline represent changes 'across' branches</li>
</ol>
<li>Rapid DVC adoption</li>
<ol>
<li>Observe that, contrary to common knowledge, most DVC projects do not make use of distribution</li>
<ol>
<li>Of 60 projects, all but Linux use centralized model around single public repo</li>
<ol>
<li>(this doesn't make sense. I think their understanding of 'distributed' is off)</li>
</ol>
</ol>
<li>Some branches that grew too different from trunk had to be abandoned</li>
<li>Prior to DVC, branches were created only for releases, not new features</li>
<li>Pre-DVC, 1.54 branches/month. With-DVC, 3.67 branches/month</li>
<li>Developers without commit privileges were reduced to working w/o VC</li>
<ol>
<li>Accepting changes from unknown devs required huge patch sets</li>
<ol>
<li>Could not add incremental work</li>
<li>Sometimes included unrelated changes</li>
</ol>
</ol>
<li>Therefore, main motivation is branching, not distribution (define "distribution"?)</li>
</ol>
<li>Cohesion</li>
<ol>
<li>Large systems structure their files in a modular manner - related files are located nearby (I question this premise)</li>
<li>[Science! Graphs are shown, descriptions and explanations are given]</li>
<li>Results show that branches are relatively cohesive.</li>
<ol>
<li>Interviews are consistent - branches are created for more than releases (low standard)</li>
<li>DVC branches comprise features, localized bug fixes, and maintenance efforts</li>
<li>Three interviewees indicate that non-trivial changes would have been created offline and then committed in a single mega-commit</li>
</ol>
</ol>
<li>Coupling and Interruptions</li>
<ol>
<li>[Hardcore science! Too difficult to understand. Questionably scientific pictures]</li>
<ol>
<li>Trying to identify and quantify 'semantic conflicts'</li>
</ol>
<li>Some disclaimer that git allows 'hidden' history in unpublished commits, hidden by rebasing</li>
</ol>
</ol>
<div>
<br /></div>
<div>
Related Work</div>
</div>
<div>
<ol>
<li>This paper's main concern is to study history-preserving branching and merging</li>
<ol>
<li>Some people advocate even finer grained history retention</li>
<li>Some people advocate automating information acquisition, such as static relationships</li>
</ol>
<li>Some people recommend patterns to use for workflows that effectively use branching</li>
<ol>
<li>Other people advocate workflows that mitigate branching/merging issues</li>
</ol>
<li>Somebody proposes current tools and project management is inadequate</li>
</ol>
</div>
<div>
<br /></div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-54149770471808660472012-01-02T12:37:00.004-06:002012-05-19T15:23:30.300-05:00Bit of MVC History and Thoughts on the Proliferation of Competing MVC FlavorsWhile I've been exploring various implementations of presentation patterns/frameworks in Javascript, I've started questioning the MVC (Model-View-Controller) pattern as a whole. What problems does it solve? How is it different competing presentation pattern ideas, such as MVP and MVVM? I'll use this blog post as a way to organize my findings and thoughts. I'll give a bit of history first, then give a bit of speculation at the end.<br />
<br />
<div>
MVC is an architectural pattern, the purpose of which is mainly code organization and separation of concerns. It was conceived a long time ago (1979) by <a href="http://en.wikipedia.org/wiki/Trygve_Reenskaug">Trygve Reenskaug</a>. He was a member of the Smalltalk community in the early days of GUI design, and took part in the early conversations of various patterns for organizing code when creating solutions for handling user input in a GUI context. He authored his first paper on MVC, titled <a href="http://www.scribd.com/doc/6414921/Original-MVC-Pattern-Trygve-Reenskaug-1979">THING-MODEL-VIEW-EDITOR</a>, which details one such pattern. The community later distilled these terms, explained <a href="http://heim.ifi.uio.no/~trygver/themes/mvc/mvc-index.html">here</a>, to become model, view, and controller, as defined in <a href="http://heim.ifi.uio.no/~trygver/2007/MVC_Originals.pdf">this revised paper</a>.</div>
<div>
<br /></div>
<div>
It is important to note, however, that this architectural pattern was conceived before complex internet pages and internet applications were possible. Rather, this first conception of MVC was a GUI solution within the problem domain of desktop applications. I believe this style of MVC, which uses multiple layered views, is suited to OS-level platforms, and it now seems incongruent as a pattern for web app servers and page generation. Internet pages and internet applications have a much different set of limitations than desktop programs - the most notable of which include the stateless nature of HTTP and the added cost of sending data back and forth across the wire between the client and the server.<br />
<br />
Because of the popularity of the MVC pattern, it was used as the pattern for delivering web pages in the internet age. Because of the differences in the problem domain, however, the pattern evolved to fit the new domain. It is possible that this general incompatibility was one of the central reasons for the many MVC spin-offs that have been conceived since then, though that is pure speculation. An equally qualified reason would be that people started using MVC without fully understanding the reasoning behind the original MVC, or without knowing that an original MVC existed.<br />
<br />
I wonder about the reader's thoughts. Do you think the reason for the proliferation of varying ideas of MVC is that the domain changed to the web? Or is it because people started using it while having a poor understanding of its reasoning and concepts? Is this a waste of the brain's processing time? Maybe, but I enjoy it.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-7448466558552493942011-12-27T17:25:00.000-06:002012-05-19T15:23:40.632-05:00Backbone Requires a Main View for the App?<div>
Here's a continuation of time spent considering my previous problem: Backbone.js and its view-centric MVC architecture.<br />
<br />
Would we really want a model to be able to render itself to the page? Considering the information it has, I don't think we do. First of all, how does it know where in the DOM to draw itself? I suppose we could give it a specific DOM element as an attribute when creating the model; then it would know where to draw itself. But what if this model is in a collection, such as a list? How does it know where in the list to render? We could probably wrestle through this problem, but the code would get somewhat mangled, and each model instance would get pretty heavy.<br />
<br />
It seems, then, that this task of rendering a model is best performed by either the model's collection or its view. In other MVC frameworks, a 'controller' is used to place the model's information in the view's DOM. In backbone, this is the responsibility of the 'View' object. Why is it called View instead of Controller? I believe it was named this way because it is meant to 'resolve to' HTML code. Not a name that suits the MVC style, but I can see the logic.<br />
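To make this division of labor concrete, here's a tiny library-free sketch. The names and wiring are illustrative, not Backbone's actual API: the model just holds data and fires change notifications, while the view listens and re-renders markup.

```javascript
// Library-free sketch: a model that notifies listeners on change, and a
// view (playing the controller role) that re-renders in response.
function Model(attrs) {
  this.attrs = attrs;
  this.listeners = [];
}
Model.prototype.on = function (fn) { this.listeners.push(fn); };
Model.prototype.set = function (key, value) {
  this.attrs[key] = value;
  this.listeners.forEach(function (fn) { fn(); });
};

function View(model) {
  this.model = model;
  model.on(this.render.bind(this)); // re-render whenever the model changes
  this.render();
}
View.prototype.render = function () {
  // "Resolves to" markup; a parent view would place this.html in the DOM.
  this.html = '<li>' + this.model.attrs.name + '</li>';
};

var item = new Model({ name: 'first' });
var view = new View(item);
// view.html → '<li>first</li>'
item.set('name', 'updated');
// view.html → '<li>updated</li>'
```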
<br />
A Backbone app, then, requires a main view (controller) that spawns views and models, using them purely for code organization's sake and delegating to them solely the task of data integrity. I started hacking at my Backbone app thinking that the main view's (controller's) job is just to initialize everything, but this is not the case. Instead, it is in charge of creating and destroying the various Views and Models in your app and associating models with views.<br />
(Still not sure of the recommended relationships between views and models. I'll have to keep reading stackoverflow's backbone tag responses to learn more.)<br />
<br />
I'm starting to think that this framework is overkill for what I'm trying to do. Something like <a href="http://knockoutjs.com/">knockout.js</a> seems like a smarter choice. I've heard the developer say that it is great for creating "JSON editors". That is, request JSON from the server, then bind its attributes to fields for easy editing. That is also on my list of libraries to try.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-1652657870679948532011-12-24T10:09:00.001-06:002012-05-19T15:23:50.659-05:00What SHOULD MVC be for Javascript? Towards other JS MVC ideas.<div>
With all the trouble I've been having with Backbone.js lately, I've decided to take a step back and look at my attempts from a higher level. Using a framework shouldn't be this difficult! Why, then, am I having so much difficulty? I've decided that it is because my idea of "Javascript's version of MVC" is ill-founded. Therefore, I've decided to start looking at the whole spectrum of Javascript frameworks - Backbone, Spine, Knockout, and JavascriptMVC (did I miss any other major ones?).<br />
<br />
Addy Osmani (not quite sure why he's famous, but he's skilled at explaining these frameworks from a high-level) recommends JavascriptMVC, because it is the "most comprehensive" and mature framework at the moment. I hope the community is friendly, because I've decided to choose this as the next stepping stone in my Javascript MVC studies. I hope the people that created it were educated enough to use intuitive object names and abstractions, as I'll need them to improve my understanding.<br />
<br />
When reading <a href="http://jupiterjs.com/news/organizing-a-jquery-application">this introduction</a> to JavascriptMVC, I've been inspired to return to the idea of an application's "state" once again. The author asserts that "The secret to building large apps is NEVER build large apps. Break up your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application." I agree completely with this! But, how can one make bite-sized Javascript pieces? Inevitably, these pieces <i>must</i> talk to each other, so how can we do this?<br />
<br />
How about this idea for communicating between modules: Use the page's <i>state</i> variables to communicate between these pieces in a pub-sub manner. This manner is very much like how disparate computer systems communicate across the internet. The internet is probably the most modular system I am aware of, and if modularity is what you want, then using the internet's model is probably best.<br />
<br />
So, the page's state. Would having a set of variables on your web page limit the scalability of your site? As long as you aren't transferring the state back to the server, and the web app talks to your servers through a stateless API, you should be fine.<br />
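A minimal sketch of this pub-sub-over-state idea follows. All of the names (pageState, selectedPostId) are made up for illustration; the point is only that modules communicate through named state keys rather than calling each other directly:

```javascript
// Shared page state with pub-sub semantics: modules subscribe to keys
// and are notified when another module publishes a new value.
var pageState = {
  values: {},
  subscribers: {},
  subscribe: function (key, fn) {
    (this.subscribers[key] = this.subscribers[key] || []).push(fn);
  },
  publish: function (key, value) {
    this.values[key] = value;
    (this.subscribers[key] || []).forEach(function (fn) { fn(value); });
  }
};

// One module reacts to a selection made by another module, without
// either module knowing about the other.
var log = [];
pageState.subscribe('selectedPostId', function (id) {
  log.push('load post ' + id);
});
pageState.publish('selectedPostId', 42);
// log → ['load post 42']
```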
<br />
Great, so I can't think of any reason why we can't have a few state variables on our client-side page, so I'll try to use the state to facilitate communication between Javascript components. (If it is easy to do with JavascriptMVC that is.)</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-39384926682314972872011-12-18T11:49:00.000-06:002012-05-19T15:24:01.305-05:00Backbone's Perverted Architecture and MVCI'm still working on getting my Backbone.js project to work. So far, I've got a text box and a list that would update itself. Not too impressive. Why am I not successful? Three reasons: (1) I've never worked in Javascript before, (2) I haven't used this framework before, and (3) I haven't used this perverted style of MVC development before.<br />
<div>
<br /></div>
<div>
Perverted style of MVC? Yes, Backbone doesn't tell you how to use its objects, it feels like it is saying</div>
<div>
"Here's 3-4 Javascript objects that we like. They work great together! They have built-in behavior, which makes it awesome! However, there's a lot that ISN'T built-in, and we leave that to you to figure out. But this is what makes Backbone flexible! Great, isn't it?"</div>
<div>
<br /></div>
<div>
Some questions I find myself asking: How should I structure my JS app? View-centric, model-centric, or collection-centric? I would like to just manage the Javascript side of the app - just worrying about the collections or models. If I add a new model to a collection, it should automatically render the changed collection to the page. I have found no examples for a model-centric app like this. I find myself with a machete in a jungle, hacking my way through the objects and their relationships, trying to find the path that the Backbone designer has left for me to find.<br />
<br />
Looking through the Stackoverflow questions and answers, it seems that it is possible to create view-only apps with Backbone. This confirms my suspicion that I've been using Backbone in a backwards way, trying to create my models first. One thing I learned today is that the idea of a Backbone View is that it 'resolves' to an HTML tree that can be applied to the page's DOM. That is, your master 'AppView' is meant to create sub-views, then request their HTML rendering to apply onto the page. Frick! This makes no sense! So the view can't actually render itself on the page? It requires a parent view to place it in the correct place? No! Maybe if I understood the capabilities of Javascript a bit better, I could make Backbone do it this way, but I was not successful.<br />
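To make that rendering model concrete, here is the idea in plain Javascript. This is not real Backbone code - just invented names sketching the parent/child relationship described above, where a child view "resolves" to HTML and the parent decides where it goes:

```javascript
// A child view resolves to an HTML fragment; it does not place itself on the page.
function ChildView(label) {
  this.label = label;
}
ChildView.prototype.render = function () {
  return '<li>' + this.label + '</li>';
};

// The master AppView creates sub-views, requests their HTML rendering,
// and is the only one that knows where that HTML belongs.
function AppView(labels) {
  this.children = labels.map(function (l) { return new ChildView(l); });
}
AppView.prototype.render = function () {
  var inner = this.children.map(function (c) { return c.render(); }).join('');
  return '<ul id="app">' + inner + '</ul>';
};

var app = new AppView(['one', 'two']);
```

In a real page, the final string would be applied to the DOM by the parent, which is the part that felt so backwards to me.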
<br />
Next time I try to fix this app, I'll take the view-centric approach. I'll have my master AppView spawn child views, then pass each a model to hold its data. It's a very silly way of approaching the problem, but I'll do it. I'll do it just to see if Backbone actually works with Visualforce. Then, I'll look into optimizing the architecture, unless I move on to a better JS MVC framework.<br />
<br />
Since I've been taking this 'backwards' model-centric approach, I'll change gears when I stab at this project next. That will work, for sure. Then, I'll try using Spine.js, which looks like it has a much more sane architecture. Also, Spine is appealing because client interactions are completely decoupled from the server, which sounds like the ideal remedy for the slow response times between Visualforce pages and their Salesforce servers.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com1tag:blogger.com,1999:blog-8410610130228839275.post-38627373126378973892011-12-15T21:39:00.001-06:002012-05-19T15:24:11.661-05:00Initial Backbone.js on Force.com Issues<div>
I spent a fair amount of time the last day or two working on creating a Visualforce page that uses backbone.js as an intermediary for communicating with the Apex controller via Ajax calls. I am still doubtful whether this framework will prove to be a good way to interface with Salesforce objects and data, but I won't know until I finish. <br />
<br />
Now, much of the time I've spent so far has gone into connecting the several backbone.js objects - the model, the view, the controller, and the HTML templates - and sorting out their duties to one another.<br />
<br />
While backbone.js is a framework that follows the MVC pattern, backbone's view serves a different purpose than the view in other frameworks, such as Salesforce's own MVC framework of Visualforce and Apex. The purpose of Backbone's view is to bind to an HTML element, and, when appropriate, render data from the model into this element using an HTML template.<br />
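For illustration, here is a toy version of that template step. This is not Underscore's actual `_.template` implementation - just a minimal stand-in I wrote to show the idea of rendering model data through an HTML template:

```javascript
// Compile a template string with <%= name %> placeholders into a render function.
// (A much-simplified stand-in for an Underscore-style template.)
function template(tmpl) {
  return function (data) {
    // Replace each <%= key %> placeholder with the matching model value.
    return tmpl.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
      return data[key];
    });
  };
}

var itemTemplate = template('<li><%= name %> (<%= status %>)</li>');
var html = itemTemplate({ name: 'Account sync', status: 'done' });
```

The view's job is then just to call such a function with its model's data and put the resulting HTML into its bound element.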
<br />
Did you notice the interesting bit? There are actually two views in play here! The HTML is the true (or truer) view, while the backbone.js view supplies the HTML view with logic, acting almost like a controller.<br />
So if the view is a controller in backbone.js, then what is the controller? Well, in backbone.js, the C represents collection, which is a "smart list" that holds backbone models. Likewise, backbone models are simply Javascript objects, nothing more!<br />
<br />
Well, that's a slight simplification, since each of these backbone objects has a bit more responsibility than I explained above. I'll explain the jobs/responsibilities of each of these backbone.js objects in my next post. For now, I want to discuss a few of the issues I've encountered.<br />
<br />
The first problem I was having was with jQuery. It is usually necessary to use jQuery in no-conflict mode, and to alias it to a variable that is not '$', such as '$j'. Backbone has two Javascript libraries as dependencies, Underscore.js and jQuery, and uses them extensively under the covers. I had some problems using no-conflict mode here, but they were solved with a few well-known tricks. Here is one way to solve the problem; this great <a href="http://lostechies.com/derickbailey/2011/11/09/backbone-js-object-literals-views-events-jquery-and-el/">blog post</a> has some other solutions.<br />
<br />
$j = jQuery.noConflict();<br />
$j(document).ready(function($){ /* All Backbone code goes here, called after the DOM is ready. Inside this function, $ is safely bound to jQuery. */ });<br />
<br />
<br />
The other problem is that Backbone.js is built to use a RESTful way of GETting data from and POSTing data to the server. So naturally, I started writing a @RestResource class that will respond to these calls to retrieve and update data in the database. This is one place I started to encounter problems - these REST handlers are located at na4.salesforce.com/services/apexrest/, which makes for a cross-site request from the Visualforce page at c.na4.visual.force.com/apex/. Still not positive this is the problem, but I'm pretty sure. To resolve this, I will attempt to handle Backbone's sync calls with Javascript Remoting.<br />
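I haven't built this yet, but a rough sketch of how sync calls might be routed to remoting instead of REST could look like the following. The remoting action names here are hypothetical placeholders, and the real Backbone and @RemoteAction plumbing is omitted:

```javascript
// Map Backbone-style CRUD verbs onto (hypothetical) remoting actions,
// so every call stays on the same origin as the Visualforce page.
function makeSync(remotingActions) {
  var methodMap = {
    create: 'insertRecord',
    read: 'queryRecords',
    update: 'updateRecord',
    'delete': 'deleteRecord'
  };
  return function sync(method, model, callback) {
    var action = remotingActions[methodMap[method]];
    action(model, callback); // same-origin call, no cross-site request needed
  };
}

// Stubbed actions standing in for server-side remoting methods.
var calls = [];
var sync = makeSync({
  insertRecord: function (m, cb) { calls.push('insert'); cb({ id: 1 }); },
  queryRecords: function (m, cb) { calls.push('query'); cb([]); },
  updateRecord: function (m, cb) { calls.push('update'); cb(m); },
  deleteRecord: function (m, cb) { calls.push('delete'); cb(null); }
});

var result;
sync('create', { name: 'test' }, function (res) { result = res; });
```

The appeal is that the dispatch table is the only place that knows how the client talks to the server, so swapping REST for remoting touches one function.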
<div>
<br /></div>
</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com6tag:blogger.com,1999:blog-8410610130228839275.post-75471695068427445372011-12-15T08:32:00.000-06:002012-05-19T15:24:23.528-05:00Backbone.js and AUI on Force.com - Good Idea?Web development is an art. It is easy to make a web page, but very difficult to make a beautiful one. It is easy to make a static page, but very difficult to make a dynamic one. It is easy to make a site with multiple pages, but very difficult to make the content intuitive and interesting to navigate. A page I make may be perfect, but a friend may look at it and dislike it, or more ideally, be able to offer constructive criticisms about it.<br />
<div>
<br /></div>
<div>
Because web development is such an art form, it matures over the years. In the early days, there were single page sites. Then came multiple page sites, followed by interactive/dynamic sites, which, as they matured and became more user- and data-centric, became full-fledged web apps. All these changes were enabled by a maturing culture among web developers, new technology and languages, and a shifting market for consumers of web sites and web applications.</div>
<div>
<br /></div>
<div>
Some web apps are data-centric, such as most apps on the platform with which I work, Force.com. What I dislike about the platform, however, is the length of time it takes for requests to return from the server and for the page to refresh. When I came across the idea of the <a href="http://alexmaccaw.co.uk/posts/async_ui">Asynchronous User Interfaces</a> for web sites, I fell in love with the idea. This is how a web site UI *should* feel. It feels fast like a native application while still allowing me to dynamically manipulate and synchronize data with the backend.</div>
<div>
<br /></div>
<div>
So why, then, are more people not making AUI web sites? I can understand if it was much more difficult, but with the right abstractions, it seems to be not too difficult at all. Choosing a simple library, such as <a href="http://documentcloud.github.com/backbone/">backbone.js</a>, <a href="http://spinejs.com/">spine.js</a>, or <a href="http://knockoutjs.com/">knockout.js</a> can make development on one of these one-page-sites much easier.</div>
<div>
<br /></div>
<div>
I want to give a high-level overview about how a tool like backbone.js could be used on top of the Force.com platform. Let's take a look at the building blocks we have. Force.com is the server-side database. The methods on the Apex controller are the intermediary between the client-side web page and the database. Then we have the client-side web page, which is HTML/CSS for the display and Javascript for the client-side logic.</div>
<div>
<br /></div>
<div>
What I want is an HTML table that is a view into a Javascript object collection. When I modify elements of the HTML table, I want the modifications to directly modify the Javascript object collection that is bound to it. Then, I want it to save to the Force.com database, either automatically at appropriate times or on-demand by clicking a save button. Let's see if we can make this happen.</div>
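As a sketch of the binding I'm describing - hand-rolled and simplified, not actual backbone.js, spine.js, or knockout.js code - the collection notifies a renderer whenever a record is added or changed:

```javascript
// A collection that re-renders its bound output whenever it changes.
// (Invented names; real libraries do this with events and DOM updates.)
function BoundCollection(onChange) {
  this.records = [];
  this.onChange = onChange;
}
BoundCollection.prototype.add = function (record) {
  this.records.push(record);
  this.onChange(this.records); // adding a record triggers a re-render
};
BoundCollection.prototype.update = function (i, field, value) {
  this.records[i][field] = value;
  this.onChange(this.records); // edits re-render too
};

// The "HTML table" here is just a string; a real app would write to the DOM.
var tableHtml = '';
var rows = new BoundCollection(function (records) {
  tableHtml = records.map(function (r) {
    return '<tr><td>' + r.Name + '</td></tr>';
  }).join('');
});

rows.add({ Name: 'Acme' });
rows.update(0, 'Name', 'Acme Corp');
```

Saving to Force.com would then be a separate step that reads `rows.records` and posts them back, either on a timer or from a save button.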
<div>
<br /></div>
<div>
I have just a little experience with Javascript and web development, so I'm not sure if this is possible or a good idea, or even if it will be better than what exists. I'll find out soon enough.</div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-56719851951441008072011-12-11T10:53:00.001-06:002011-12-15T08:37:04.563-06:00Changing Object Functionality in OOP Languages<div>
A while ago, I had a conversation with a coworker about the maintainability of code and other aspects of software craftsmanship. Suppose I make a method and think it's perfect. Inevitably, this method will have to change its functionality, probably due to a change in requirements. In cases like this, how should one approach this refactoring? I just came across the concept of the "open-closed" principle, conceived way back in 1988. <br />
<a href="http://en.m.wikipedia.org/wiki/Open/closed_principle">en.m.wikipedia.org/wiki/Open/closed_principle</a><br />
I originally was influenced by the idea of immutable data structures, mostly due to listening to Rich Hickey praising them in his talks on Clojure and its concepts. Immutable data structures are interesting because they can be shared among different methods and threads safely. Immutability helps remove any surprises caused by method/function side effects. Though I haven't run into this problem before, it's a very attractive concept because it encourages simplicity in reading and maintaining existing code. Simplicity is king in the land of software development.<br />
The open-closed principle, on the other hand, promotes simplicity in method/API design by promising to not change the API and its implementation. This reduces the chance of creating bugs in the callers of your methods. If the implementation of a unit of code must change, the open-closed principle instructs us to subclass and override the unit of code we wish to change. This sounds great and makes our code immutable, in a sense, but it seems to forget about another core tool of programming, which is the wonderful typing system.<br />
This improvement/customization to the open-closed principle occurred in the 1990s, according to wikipedia. The revised version of the principle instructs us to use interfaces or abstract classes and create new implementations of them, rather than directly subclassing the class to change. This is a much better solution, as interfaces are a more precise manner of expressing functionality and purpose of an object.<br />
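Here is a small Javascript illustration of the revised principle. The names are invented, and since Javascript has no formal interfaces, the "interface" here is just an agreed-upon method contract:

```javascript
// The report code is closed for modification: it depends only on the
// format() contract, never on any concrete formatter class.
function runReport(records, formatter) {
  return records.map(function (r) { return formatter.format(r); }).join('\n');
}

// Original implementation of the contract.
var plainFormatter = {
  format: function (r) { return r.name + ': ' + r.total; }
};

// Requirements change: instead of editing runReport or plainFormatter,
// we extend the system with a new implementation of the same interface.
var csvFormatter = {
  format: function (r) { return r.name + ',' + r.total; }
};

var data = [{ name: 'a', total: 1 }, { name: 'b', total: 2 }];
var plain = runReport(data, plainFormatter);
var csv = runReport(data, csvFormatter);
```

In a typed language, `format()` would be declared on an interface or abstract class, making the contract explicit to the compiler as well as to readers.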
It may be a challenge, but we should remember to use the tools that our languages provide us. We should remember to use interfaces where appropriate. </div>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-65291010506645795702011-12-06T09:10:00.001-06:002012-05-19T15:24:36.315-05:00The Base of Maintainable Programming Languages - Libraries over Syntax?<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">I often wonder about what language is "the best". It's fun to think about these things theoretically.</span><br />
<br />
<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">I believe that there are three relevant ways of expressing meaning in a language: syntax, idioms, and libraries. Syntax is direct meaning, and the compiler won't let you write bad syntax. Idioms are common ways of using language features to solve common problems. Libraries solve big problems that require lots of code, abstracting them into a simple interface, a new syntax.</span><br />
<br />
<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">Discussing just the library side of languages -</span><br />
<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">If a language has very powerful standard syntax, newbies will try to solve all of a language's problems with the language's standard syntactic tools. If a language has fewer syntactic tools, then the newbie will be forced to either write lots of code to solve simple problems, or to dig into the language's community and common libraries to solve problems.</span><br />
<br />
<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">I argue that it is theoretically better to have a language with less complex syntax and a stronger community-maintained set of libraries that solve common problems. Take Lisp/Clojure, where much of the language's functionality is built from its libraries. This way, if you attempt to solve a problem, you are reinventing very little; rather, you are subscribing to an existing solution to the problem.</span><br />
<br />
<span class="Apple-style-span" style="background-color: white; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;">When others come to look at your code later on, they can see what solution you chose to subscribe to. Even better, your solution/library has a name attached to it, and that name can be Googled to find much more information about that solution on that library's website or IRC channel.</span>Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-90255255861468784732011-08-05T08:08:00.044-05:002011-12-15T08:36:24.637-06:00A Pattern for Portable Apex Unit TestsForce.com is a multi-tenant platform, so if your code is running rampant, your neighbors will feel its wrath. It only makes sense that Salesforce requires developers on its platform to write unit tests. When refactoring code, unit tests will tell you when something breaks. You may find that writing your unit tests for a piece of code is too difficult, which is a sign that your design is too complex and will be unwieldy when debugging at later time or by someone else. There are pitfalls when using any unit test framework, but I want to take some time to highlight a few of the more sinister pitfalls that will be invisible to you until you fall into them head-first.<br />
<br />
<b>Field requirements change.</b><br />
The first common design oversight is not keeping in mind the fact that field requirements may change. It's inevitable during the growth and refining of any Salesforce org, if not every software application, that requirements change. This could be that older objects have new fields, new required fields, or changed field definitions. This is fine, right? You aren't hard-coding object creation into each and every unit test, are you? What's that? To shreds, you say? Well, if you do hard-code sObject creation in your unit tests, then you're probably in the majority, so don't worry, it's definitely the most direct and simple way to write them. Let's see how that would look:<br />
<br />
<pre class="brush: java;">static testMethod void testCreateMyObject_shouldSucceed() {
    //Create test data
    MyObject__c object1 = new MyObject__c(Name = 'TestName', MyField__c = 50);
    //Invoke functionality
    Test.startTest();
    String errorMessage = '';
    try {
        Database.insert(object1);
    }
    catch (DmlException e) {
        errorMessage = e.getMessage();
    }
    Test.stopTest();
    //Check results
    System.assertEquals('', errorMessage);
}
</pre>
<br />
This looks fine, right? It sure does, so let's copy and paste this unit test about thirty times to test for the various edge-cases. Cool. Now fast-forward a few months to when MyObject__c gets MyField2__c added to it and made to be a required field. Ka-boom! Time to rewrite thirty unit tests!<br />
<br />
<b>Moving code to a different org.</b><br />
Some developers will have to package their code and move it to a different org. If you are one of these developers, I'm sure you've had a multitude of issues with this before. You've got your code working great in your sandbox org, and then applied some fixes after discovering that your code doesn't account for running in an empty org and can't handle a few measly null references. After that experience, you'll be a little nervous when it becomes time to install it into a production org, or worse, a production org with complex workflows and triggers.<br />
<br />
Your fears come true: when installing your wonderful package of joy into the production org, you find that your unit tests are trying to insert standard Contact records with the bare minimum number of fields, and those Contacts are being rejected. How could you have known that the installing org decided to make Contact.FavoriteCRMSoftware__c a required field? Time to rewrite some unit tests!<br />
<br />
<b>Force.com Portable Tests Pattern</b><br />
So what's a dev to do? Now that we have the foresight, what can we do to evade these errors and save ourselves some unit test re-writing? Thinking about it, we find the crux to be this question: How can we successfully insert data when we don't know the conditions for successful insertion at design time?<br />
<br />
The suggested answer: For most cases, isn't it enough to query for an existing record in the database to use? That record is in the org, so it must already have the information required by the org. So grab one, modify the record to the state that your test needs, and go with it. If you need to test insertion, this probably won't work (though maybe you could query for a record, and set the ID of the returned record to null, then insert it as a new record. Hmm...).<br />
<br />
I postulate that the best solution for these issues is to use the following Force.com Portable Tests Pattern (please suggest a better name). It uses a TestObjects class that acts as a kind of record factory that abstracts away an individual unit test's responsibility for creating/querying for a record to use in your test. Check out how we would change the example above:<br />
<br />
<pre class="brush: java;">static testMethod void testCreateMyObject_shouldSucceed() {
    //Create test data
    //Invoke functionality
    Test.startTest();
    String errorMessage = '';
    try {
        //This method queries for or creates and inserts a record for us.
        //We could also use the createMyObject method in this case.
        MyObject__c object1 = TestObjects.getMyObject();
    }
    catch (DmlException e) {
        errorMessage = e.getMessage();
    }
    Test.stopTest();
    //Check results
    System.assertEquals('', errorMessage);
}
</pre>
<br />
And then use a TestObjects class that will handle the creation/querying for a record to use:<br />
<br />
<pre class="brush: java;">public with sharing class TestObjects {

    //Use the get* method if you want to do the query-first object creation.
    public static MyObject__c getMyObject() {
        MyObject__c myObject = new MyObject__c();
        try { //Try to query for the desired record first.
            myObject = [SELECT Name, MyField__c FROM MyObject__c LIMIT 1];
        }
        catch (QueryException e) { //If that fails, then create one.
            myObject = TestObjects.createMyObject('TestFeature', 50);
        }
        return myObject;
    }

    //If you want to skip the query-first part, just call the create* method.
    //If you discover a required field in the org, you only need to change this method, not every single unit test.
    public static MyObject__c createMyObject(String name, Integer myField) {
        MyObject__c myObject = new MyObject__c(
            Name = name,
            MyField__c = myField);
        Database.insert(myObject);
        return myObject;
    }
}
</pre>
<br />
What do you think of this structure? Do you see anything I'm missing? Can it be made more robust? Should we change the name of the TestObjects class to something else? Leave a comment below.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0tag:blogger.com,1999:blog-8410610130228839275.post-14180411116086578422011-07-18T14:12:00.001-05:002011-12-15T08:36:33.020-06:00Alternative to the Average AJAX ActionStatusVisualforce offers some great shortcuts that make for faster development and a more consistent user experience. It is these tools to which Salesforce refers when they proudly advertise the short development cycles that programmers experience when working on the Force.com platform. It makes my life as a developer easier, and I really appreciate that. Sometimes, however, you want to add a slightly more advanced feature, one that requires pushing your code off of and beyond the tracks that the platform provides you.<br />
<br />
One area that Visualforce has made very simple is Ajax. Adding just one or two Visualforce tags and attributes can produce a responsive page that uses Ajax for updating information. One such tag is the <a href="http://www.salesforce.com/us/developer/docs/pages/Content/pages_compref_actionStatus.htm">actionStatus</a> tag. Ajax operation will work just fine if you don't use this tag, but the user won't know that an Ajax update is, in fact, taking place. Add this tag to the page and reference it from the same place that called the Ajax to tell the user, "Hey, wait a second. We've got some data coming from the server, so you should wait for it to arrive before continuing." Here's the simplest way to use this tag, taken from the SF documentation for <a href="http://www.salesforce.com/us/developer/docs/pages/Content/pages_compref_commandButton.htm">commandButton</a>:<br />
<br />
<pre class="brush: html;"><apex:page controller="exampleCon">
<apex:form id="theForm">
<apex:outputText value="{!varOnController}"/>
<apex:commandButton action="{!update}" rerender="theForm" value="Update" id="theButton"/>
</apex:form>
</apex:page>
</pre>
<br />
Code Review: In this example, the secret sauce that adds the Ajax is the action and rerender attributes on the commandButton tag. Clicking on the button will send an Ajax message back to the server that will get the latest value of varOnController, which is a property of the controller object that we assume is changing with time. When the message returns to the page, the element that the outputText element creates will be replaced with the new value.<br />
<br />
Now, let's imagine that it takes a few seconds for the controller to get this updated value back to us. The user will sit there, staring at the page, wondering if the button-click didn't work. We should give him some feedback:<br />
<br />
<pre class="brush: html;"><apex:page controller="exampleCon">
<apex:form id="theForm">
<apex:outputText value="{!varOnController}"/>
<apex:actionStatus startText=" (working...)" stopText=" (done)" id="updateStatus"/>
<apex:commandButton action="{!update}" rerender="theForm" status="updateStatus" value="Update" id="theButton"/>
</apex:form>
</apex:page>
</pre>
<br />
Code Review: This code is the same as the first example, but this time we've added an actionStatus tag, which we attach to the commandButton's Ajax call by using the status attribute on commandButton. With this simple change, when the Ajax call begins, (working...) will appear next to the outputText, and when it finishes, it will change to (done). This is a pretty good quick-and-done solution for user feedback.<br />
<br />
The Problem: But what if we had a page that was heavy on form inputs, one that requires lots of user interaction? Is there a way to tell the user to wait before filling in more fields until the page updates? Well, I've found two possible solutions. Neither is perfect, but they both get the job done. What we want to achieve is to disable all inputs and buttons on the form while the page update is taking place.<br />
<br />
1) The first way to accomplish this is to create a drop-down curtain effect, which drops a see-through curtain to capture clicks over the form. Here's how solution number 1 works:<br />
<br />
<pre class="brush: html;"><apex:pageBlockSection title="Form 1" id="formSection" collapsible="false">
<div id="loadingDiv"></div>
<apex:inputField value="{!myObject.Name}"/>
<apex:inputField value="{!myObject.Address}"/>
<apex:inputField value="{!myObject.OtherFields}"/>
<apex:commandButton action="{!update}" rerender="formSection" onclick="showLoadingDiv();" oncomplete="hideLoadingDiv();" value="Update" id="theButton"/>
</apex:pageBlockSection>
</pre>
<br />
Code Review: This is just a pageBlockSection that is contained inside form tags, which aren't shown. This one has three fields, though it could have more, and a button to post it back to the server. Instead of using the actionStatus tag, we're going to call our own Javascript functions to manipulate the loading curtain:<br />
<br />
<pre class="brush: js;">$j = jQuery.noConflict();
//This escapes SF-created IDs
function esc(myid) {
return '#' + myid.replace(/(:|\.)/g,'\\\\$1');
}
function showLoadingDiv() {
var divToScreenEsc = esc("{!$Component.formSection}");
var newHeight = $j(divToScreenEsc + " .pbSubsection").css("height"); //Just shade the body, not the header
$j("#loadingDiv").css("background-color", "black").css("opacity", 0.35).css("height", newHeight).css("width", "80%");
}
function hideLoadingDiv() {
$j("#loadingDiv").css("background-color", "black").css("opacity", "1").css("height", "0px").css("width", "80%");
}
</pre>
<br />
I'm using jQuery here. I've had bad experiences trying to use vanilla Javascript, and I've vowed to never use it again. jQuery gives consistent results across browser DOMs and across Javascript implementations itself. Note the esc function. This is necessary to escape the non-standard character in SF-created IDs for use in jQuery selectors, and was a solution created by Wes Nolte. See his blog post on the solution <a href="http://th3silverlining.com/2010/02/17/visualforce-ids-in-jquery-selectors/">here</a>. Beyond that, we're just selecting the loadingDiv and setting some styles, pretty simple.<br />
<br />
<br />
2) The second solution is to avoid adding another element to the DOM, and just using Javascript to disable all selectable fields and buttons in the form:<br />
<br />
<pre class="brush: html;"><apex:pageBlockSection title="Form 1" id="formSection" collapsible="false">
<apex:inputField value="{!myObject.Name}"/>
<apex:inputField value="{!myObject.Address}"/>
<apex:inputField value="{!myObject.OtherFields}"/>
<apex:commandButton action="{!update}" rerender="formSection" onclick="showLoadingDiv2();" oncomplete="hideLoadingDiv2();" value="Update" id="theButton"/>
</apex:pageBlockSection>
</pre>
<br />
Code Review: This is the same VF snippet as above, minus the curtain div. This time, we'll use Javascript to set the styles of the form elements themselves.<br />
<br />
<pre class="brush: js;">function showLoadingDiv2() {
var divToScreenEsc = esc("{!$Component.formSection}");
$j(divToScreenEsc).css("opacity", "0.35");
$j(divToScreenEsc + " input, " + divToScreenEsc + " select").attr("disabled", "true");
}
function hideLoadingDiv2() {
var divToScreenEsc = esc("{!$Component.formSection}");
$j(divToScreenEsc).css("opacity", "1");
$j(divToScreenEsc + " input, " + divToScreenEsc + " select").removeAttr("disabled"); //removeAttr re-enables; setting the attribute to "false" would leave the fields disabled
}
</pre>
<br />
This will disable all input and select elements in the specified form section as well as turn the entire section slightly transparent by setting the opacity to 35%.<br />
<br />
That's it, really. Two simple solutions that I came up with to give a better user experience. Improve upon it! I'm not a pro at HTML and Javascript, and maybe you can help make it better - leave a comment!<br />
<br />
Before concluding this post, I want to mention the more elegant version of this that you should investigate:<br />
Keep in mind that the actionStatus tag has both onstart and onstop attributes, from which you can call the necessary Javascript functions. The advantage to doing it this way is that the functionality is again tied to the actionStatus tag. This adds a layer of abstraction between the functionality and its visual status indicator. You would call the Javascript functions from the actionStatus tag instead of the Ajax-initiating tags, such as a commandButton, keeping your code DRY and happy.Alexhttp://www.blogger.com/profile/12271231145132908015noreply@blogger.com0