Monthly Archives: September 2013

Quail


I wish I’d had a better camera with me at the time; this was as close as they’d let me get.

End of the Chayote


The chayote plant got damaged by animals and killed by drought.  Maybe I’ll try again next year.

Software Development Project Estimating

Surprisingly, I’ve seen many long guides to estimating software development projects, but they all boil down to three steps:

  • Thoroughly design the software
  • Break the development down into small parts that will each take between a couple of hours and a couple of days to implement (the shorter end of that range is better)
  • Add up the length of all the parts
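As a sketch, the third step is nothing more than a sum over the breakdown (the task names and hour figures here are invented for illustration):

```javascript
// Hypothetical task breakdown with per-task estimates in hours.
const tasks = [
  { name: "login form", hours: 4 },
  { name: "password reset", hours: 6 },
  { name: "session handling", hours: 3 },
];

// The project estimate is simply the sum of the parts.
const totalHours = tasks.reduce((sum, t) => sum + t.hours, 0);
console.log(totalHours); // 13
```

The hard work is entirely in the first two steps; if the breakdown is honest, the addition takes care of itself.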

Project Schedule

Managers will often try to negotiate down the amount of time estimated.  Their purpose is to have the developer commit to the duration and make personal sacrifices (overtime, skipping meals, etc.) to ensure that the commitment is met.  This is not a good way to motivate people.  If the developer fails to meet the commitment they will be demoralized by frustration and failure, and in the future they will have lower expectations and poorer results.

Estimates should be treated as estimates, not promises.  For example, if I ask you to estimate what the high temperature will be tomorrow, is that the same as promising what the high temperature will be?  Obviously not, because it’s not under your control.  Similarly, what would you expect the developer to do to control the duration of the effort?  Rhetorical question.

For a much better discussion see here.

How to pass data from one page to another in web applications

Let’s see how a page in a web application can pass parameters to a different page, when it’s appropriate to use each method, and what types of problems you might run into.  I also ask a related question when interviewing web programmers, and I may not be alone in that, so it’s a good idea to know this material.

GET

The most obvious way to pass information to another page is to link to it with a URL parameter, such as: <a href="page2.aspx?parameter1=802&parameter2=701">Click here</a>.  This creates a GET request to page2.aspx.  GET requests are appropriate when the user, by requesting the page, will simply be reading information.  The server may send directives in the HTTP headers to ask the browser to cache the page.  If the browser caches aggressively, or the HTTP headers ask it to cache, the user may not see the most up-to-date information on the second page.  This is especially dangerous when the user is performing updates to the data and expects to see the results on the next page.  This issue can be handled by adding another URL parameter with a non-repeating value.  Good choices are a random number (generated through JavaScript), the current date and time including seconds, or a sequence from a database.
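A minimal sketch of that cache-busting trick, using the current time in milliseconds as the non-repeating value (the parameter name nocache is an arbitrary choice of mine):

```javascript
// Append a non-repeating "nocache" parameter so the browser sees a new
// URL each time and will not serve a stale cached copy of the page.
function cacheBustUrl(url) {
  const separator = url.includes("?") ? "&" : "?";
  return url + separator + "nocache=" + Date.now();
}

console.log(cacheBustUrl("page2.aspx?parameter1=802"));
// e.g. page2.aspx?parameter1=802&nocache=1379462400000
```

The receiving page simply ignores the extra parameter; its only job is to make the URL unique.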

You should avoid making links that cause actions to occur, such as deleting a record.  Here’s a classic piece of code that works well but isn’t a good idea: <a onclick="return confirm('Are you sure you want to delete?');" href="Delete.aspx?record_id=123">[Delete]</a>.  The most important reason is that the W3C says this is wrong.  Programmers, unfortunately, rarely follow standards when not following them is more convenient in the short run.  Since this code works just fine and is very easy to write, why not use it?  One-word answer: spiders.  If your page is being indexed by a search engine (or perhaps a vulnerability testing tool), it will try to follow every link.  And it should, since the W3C says links only cause reads, not writes.  In practice this usually won’t be an issue, because deleting a record should require authentication (which should be verified on the page doing the delete), and your search engine won’t be authenticated.  Nevertheless, this issue presents a very real threat to your data, so just follow the standard.

Another potential problem with GET requests is that they have a limited length depending on the browser; IE’s limit is about 2,000 characters.  If you restrict yourself to using GETs for data reads, and the page receiving the parameters queries everything else it can from the database, you shouldn’t run into trouble with the length limit.

POST

Compared to GET, POST avoids these issues but is not quite as convenient.  It would be very unusual for a browser to cache a POST request, though if it did happen, you could use the same changing-parameter trick.  POST also doesn’t have a length limit, so that isn’t an issue.  POST should be used only for writing data to the database, for example inserting a record.

<form method="post" action="page2.aspx">
Name: <input name="name">
<input type="submit">
</form>

Our first concern with POST is that if the user goes to page2, proceeds onward through a link on that page, and then clicks back, they may get a prompt asking whether they want to resubmit the form.  It is a nuisance and doesn’t provide a good user experience.  A good practice is that once the POST to page2 is done, you send the user a server-side redirect (which causes the browser to do a GET to a third page, or potentially even back to page2; in that case page2 needs logic to tell whether the request is a GET or a POST).
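That Post/Redirect/Get flow can be sketched independently of any particular framework (the handler shape, handlePage2, and saveRecord are hypothetical names of mine):

```javascript
// Post/Redirect/Get: the POST performs the write exactly once, then
// answers with a 303 redirect so the browser's next request is a GET.
function handlePage2(req, saveRecord) {
  if (req.method === "POST") {
    saveRecord(req.body); // perform the write
    return { status: 303, location: "page2.aspx" }; // browser follows with GET
  }
  // GET (including the one that follows the redirect): just render.
  return { status: 200, body: "rendered page2" };
}
```

Because the page the user ends up on was fetched with a GET, clicking back or refresh re-reads it instead of repeating the write.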

A second concern is that a page requested with a POST cannot be bookmarked.  Since the POST request is supposed to perform a permanent operation, repeating it would not be good.  Again, redirecting after the POST is generally the best approach.

COOKIES

Generally, you can get by on GET and POST.  Cookies are a much less convenient way to pass data but still very important.  Cookies are used behind the scenes for almost all session tracking, and sessions are practically essential to making authentication work.  Unlike GET and POST, where the data is exposed to the user, session data stays on the server; as long as the session token in the cookie isn’t compromised, the user has no access to the data, so it is safe from tampering.  The catch is that the session token is stored in a cookie that is (usually) destroyed when the browser is closed.

Cookies can be used directly through JavaScript or through HTTP headers.  Most server-side programming languages expose cookies in a relatively convenient way.  They are limited to very small amounts of data.
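For illustration, here is roughly what reading and writing the cookie headers by hand looks like (frameworks normally hide this; the helper names are mine, and the Set-Cookie attributes shown are just sensible defaults):

```javascript
// Parse a Cookie request header ("a=1; b=2") into a plain object.
function parseCookies(header) {
  const out = {};
  for (const part of header.split(";")) {
    const eq = part.indexOf("=");
    if (eq > 0) {
      out[part.slice(0, eq).trim()] = decodeURIComponent(part.slice(eq + 1));
    }
  }
  return out;
}

// Build a Set-Cookie response header value for a session token.
// HttpOnly keeps the token out of reach of page JavaScript.
function serializeCookie(name, value, maxAgeSeconds) {
  return name + "=" + encodeURIComponent(value) +
         "; Max-Age=" + maxAgeSeconds + "; Path=/; HttpOnly";
}

console.log(parseCookies("session=abc123; theme=dark").session); // abc123
```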

My recommendation is to use cookies and sessions exclusively for authentication information and not to consider them permanent, even if you set a long expiration date.  Non-authentication data must be kept very small because of the size limit.  Users do have access to cookies, so they should not be considered invulnerable to tampering unless you use encryption-related measures.

Be careful with sessions for non-authentication data, because the data is stored on the server, usually in RAM, although databases are an option.  When a user works in multiple windows, context data from the two windows can get confused, causing unexpected behavior for the end user.

HTML5 LOCAL STORAGE

This is a new method of storage that is making its way into modern browsers.  Once IE7 is out of the way it can be considered standard.  It is capable of storing considerably larger amounts of data than GET parameters or cookies, but considerably less than a relational database.  There are several valuable scenarios for using local storage:

  • Avoid the size limitation of cookies
  • Make a rich web application and improve performance by allowing the browser to cache server data instead of re-querying and re-transmitting it.
  • In niche scenarios you may be able to avoid centralized storage costs by distributing storage to users.
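The caching scenario above might look something like this (localStorage is a browser-only API; the storage argument is injected here so the same logic runs anywhere, and the key and time-to-live choices are mine):

```javascript
// Cache server data under a key with a time-to-live.  On a fresh hit the
// cached copy is returned; otherwise fetchFresh() is called and stored.
function cachedFetch(storage, key, ttlMs, now, fetchFresh) {
  const raw = storage.getItem(key);
  if (raw) {
    const entry = JSON.parse(raw);
    if (now - entry.savedAt < ttlMs) return entry.data; // still fresh
  }
  const data = fetchFresh(); // e.g. an AJAX call to the server
  storage.setItem(key, JSON.stringify({ savedAt: now, data }));
  return data;
}

// In the browser, pass window.localStorage as the storage argument
// and Date.now() as now.
```

This spares the server a re-query and the network a re-transmission whenever the cached copy is recent enough.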

Like cookies, the information stored in local storage shouldn’t be transitory in nature and should not be intended for a specific subsequent page.  It is site-wide and should be treated as such.  GET and POST should be used when the consumer of the data is just a single subsequent page.