Categories
Other Programming

jQuery Map Function

Sometimes when you are making a web application you need to search some data. A lot of the time, that data exists as an array in memory. I recently came across such a problem on a PhoneGap project I’m working on. The app has to work offline, so my searching needed to take place in JavaScript. Since we’re using jQuery with this app, I decided to play with jQuery’s map function. Map takes your array and performs an operation over each value in it. This was super handy in my case because it allowed me to search through my data at a fast pace, without having to make an Ajax call out to my database to do a search with MySQL.

Example:

var searchTerm = $("#searchField").val().toUpperCase();
var results = $.map(self.defaultProductList, function(product, i) {
	// Return the product if the search term appears in its name.
	// $.map drops null/undefined return values, so products that
	// don't match are excluded from the results array.
	if (product.name.toUpperCase().indexOf(searchTerm) != -1) {
		return product;
	}
});

searchTerm is the value that I’m searching for. $.map takes an array as its first argument and a function as its second. I created an anonymous function that checks to see if the search term is in the current object. If it is, I return the value so it gets added to the final array; if I return nothing (undefined), the item is left out. All in all, an excellent way of searching through data when you don’t have the luxury of a database to query.

Categories
Other

Remove undefined from a Javascript object

I’ve been doing a fair amount of JavaScript programming lately, and I found myself needing to remove a nested object from an object. Doing this is easy enough with the “delete” operator, but it leaves you with annoying “undefined” entries all over. To get around that, I scoured the internet for a way to remove them easily. It turns out that if efficiency isn’t a concern, it’s easiest to drop the objects you want to keep into a new array and then re-assign it.

var tmpArray = [];
for (var el in self.orderData.data.items) {
     // Only keep entries that aren't undefined (or otherwise falsy).
     if (self.orderData.data.items[el]) {
          tmpArray.push(self.orderData.data.items[el]);
     }
}
self.orderData.data.items = tmpArray;

Easy as pie.

Categories
PHP Programming Wordpress Development

Caching WordPress Data with the Transients API

If you’re a plugin or theme developer, there may come a time when you need to execute a long-running operation. It doesn’t need to be anything complicated; something as simple as fetching a Twitter feed can take a significant amount of time. When you come across these types of situations, it’s handy to be able to store the data on your own server and then fetch a new copy of it every X hours. This is called caching, and WordPress conveniently comes packaged with an excellent caching API called Transients.
WordPress Transients API

The Transients API is surprisingly simple to use. In fact, it’s very much like using update_option and get_option, except with an expiration time. If you aren’t familiar with caching at all, here’s the general workflow:

  1. If the data exists in the cache and isn’t expired, get it.
  2. If the data doesn’t exist in the cache or is expired, perform the necessary actions to get the data.
  3. Store the data in the cache if it doesn’t already exist or is expired.
  4. Continue from here, using the data.

When attempting to use the Transients API for caching, there are three functions that you need to be aware of: set_transient, get_transient, and delete_transient.

  • set_transient($identifier, $data, $expiration_in_seconds): This function stores your data in the database. The identifier is a string that uniquely identifies your data. Your data can be any sort of complex object, so long as it is serializable. The expiration is how long you want your data to be valid, in seconds (ex: 12 hours would be 60*60*12).
  • get_transient($identifier): This retrieves your data. If the data doesn’t exist or the expiration time has passed, false is returned. Otherwise, the same data you stored will be returned.
  • delete_transient($identifier): This will delete your data before its expiration time. This is handy if you are storing post-dependent data, because you can hook it into the save action so that every time you save a post, your cached data is cleared (see the sketch just after this list).
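
To make that save-hook idea concrete, here is a minimal sketch; the function name and transient name are my own placeholders, not from any particular plugin:

function my_plugin_clear_cache($post_id) {
     //Hypothetical transient name; use whatever identifier you cached with.
     delete_transient('my_plugin_twitter_feed');
}
add_action('save_post', 'my_plugin_clear_cache');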

Now that we’ve covered the basics, how about a quick example?

//Check for a cached copy before doing the expensive work.
if (false === ( $my_data = get_transient('super_expensive_operation_data') ) ) {
     $my_data = do_stuff();
     //Cache the result for 12 hours.
     set_transient('super_expensive_operation_data', $my_data, 60*60*12);
}

echo $my_data;

//Stand-in for some long-running operation.
function do_stuff() {
     $x = 0;
     for ($i = 0; $i != 999999999; $i++) {
          $x = $x * $i;
     }
     return $x;
}

The example is pretty straightforward. We first check to see if there is a cached copy of the data; if there isn’t, we fetch the data from the “do_stuff” function and store it in the database. Simple, right?

One of the benefits of using the Transients API (aside from speeding your site up) is that plugins like WP Super Cache or W3 Total Cache will auto-magically cache your data into memcached if you have it set up. For you, this means an even faster site! If you have any questions about caching techniques or the Transients API, leave a comment and I’d be happy to help.

Categories
PHP Programming

Tracking Email Open Time with PHP

You can download my code for this article here.
Some time ago I had great aspirations of launching a web company that does email tracking and analytics. One of the things that I really wanted to figure out, but that wasn’t well documented on the web, was how to track how long a user had a particular email open. When a company like MailChimp wants to track the emails they send out, they put a small image in the email called a “beacon”. When the user opens the email, the beacon image is requested from the server. The server sends the image, but not before it gathers information about the computer requesting it. This works great for checking if an email was opened, or what platform the person is on, but it doesn’t work at all for determining how long the email was open.

One option that came to mind for checking the open time of an email was long polling.  Long polling (in this case) would use JavaScript to contact the server every X seconds after the email was loaded.  Using those requests, it’d be trivial to find out how long the email was open.  Unfortunately, most (if not all) email clients don’t allow the execution of JavaScript within emails, so that idea was completely sunk.  The only option I had left was to use the beacon image to somehow determine open time.

The only option I could think of for using the image beacon without any Javascript was to redirect the image back to itself.  After much trial and error, I came up with the following.

//Open the beacon image and send an image header so the mail
//client expects an image and keeps the connection open.
$fileName = "../img/beacon.gif";
$fp = fopen($fileName, "r");
header("Content-type: image/gif");

//Sleep to control how granular the timing data is.
sleep(2);

//Redirect the request back to this same script. Each incoming
//request marks roughly two more seconds of open time.
if(isset($_GET['clientID'])) {
    //The clientID is already in the URI, so keep it as-is.
    $redirect = $_SERVER['REQUEST_URI'];
} else {
    //First hit: tag the redirect with the client's identifier.
    //($clientID comes from wherever you generated the beacon URL.)
    $redirect = $_SERVER['REQUEST_URI'] . "?clientID=" . $clientID;
}
header("Location: $redirect");

So what’s happening in this code?  First of all, we’re opening a small GIF file that we’re going to pretend to send to the user.  The second step is to send an image header to the user so that their mail client expects an image to be delivered.  This step is important because if the header isn’t sent, the browser/mail client will close the connection.  After that, you make the request sleep for a few seconds (as few or as many as you want, depending on how granular you want your timing data to be) and then redirect back to the same page.  The “if” statement is there so you can identify incoming requests and log the data accordingly.
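
The logging half isn’t shown above, but a minimal sketch might look like the following, run near the top of the beacon script; the PDO credentials and the “beacon_hits” table are hypothetical placeholders:

//Hypothetical logging sketch: record one row per beacon hit. The open
//time for a client is then roughly (last hit time - first hit time).
$clientID = isset($_GET['clientID']) ? $_GET['clientID'] : 'unknown';
$db = new PDO("mysql:host=localhost;dbname=tracking", "user", "pass");
$stmt = $db->prepare("INSERT INTO beacon_hits (client_id, hit_time) VALUES (?, NOW())");
$stmt->execute(array($clientID));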

So there you have it.  If you’ve ever wondered how people track the open time of an email, it’s probably a method very similar to this.  The only caveat to this method is that it relies on the user loading images in an email.  However, if you have a large enough sample you can just take the average open time from the users that did open it and be fairly confident with that.

Note: There has been some discussion over on Stack Overflow about this article. You may find it helpful.

 

If you liked this article, then you may like:

  • PHP Dark Arts: Multi-Processing (Part 1)
  • PHP Dark Arts: Multi-Processing (Part 2)
  • PHP Dark Arts: Shared Memory Segments (IPC)
  • PHP Dark Arts: Semaphores
  • PHP Dark Arts: GUI Programming with GTK
  • PHP Dark Arts: Sockets
  • PHP Dark Arts: Daemonizing a Process

Categories
Other

Is it a bubble or something else?

It has all happened before, and will happen again…

Back in the late 90’s, we experienced an economic bubble of immense proportions. The internet (read: The World Wide Web) was just starting to gain mainstream acceptance, which is when the gold rush began. Companies with no real business plan and no way of making profits were securing millions of dollars in funding. Beyond funding, some of these companies were getting bought for BILLIONS of dollars. For instance, The Learning Company was purchased by Mattel for over $3 billion in 1999, but was sold for only $27 million in 2000. While the company clearly had some value, it was overvalued beyond any reasonable price. This is the epitome of the “Dot-Com” bubble.

Over the past few months, there has been a lot of discussion on Hacker News about the possibility of another “Dot-Com” bubble happening right now. A lot of people think that we are winding up for another bubble, but there is also a fairly large number of people who think that this time is different. I fall in the latter group, and here’s why.

Starting with YCombinator, a new philosophy on web startups emerged: lean startups. In a nutshell, your startup is given a small amount of money (enough to live frugally on for a few months) and mentorship. The most important part of programs like YCombinator is the mentorship. You get access to seasoned investors, business people, and founders who help you realize your idea’s potential. The upside to bringing a company to fruition this way is that your startup costs are low, and you will know very quickly whether you can become profitable. During the first bubble, anybody with an idea and a web page could get millions in funding. No market validation required, just an idea. This time around, you actually need to have a plan. You need to have traction. You need to be profitable. Sure, some companies are getting overvalued (*cough* Facebook *cough*), but that happens whether we’re in a bubble or not.

The important thing to take away from this is to look at which companies are getting serious funding (>$500k) and which companies are making nice (fat) exits. Are they good companies? Would you use their product? Would someone you know use their product? Are they profitable? Do they have a user base? If you can answer “yes” to most of these questions, we probably aren’t in a bubble. We’re in something else. A new economy? An information economy? Well, we already have an information economy, so what now? We’re transforming the way we do business and interact with each other. Instead of doing things yourself, why not let somebody else do it for you (hosting: Heroku)? Keeping in contact with people is hard, so why not let Facebook do it for you?

I’m not sure where all this is leading, but I’m fairly positive it’s not a bubble. It’s something different. It’s a transformation of our economy. To what, I don’t know. But it is changing, and it’s going to touch every single one of our lives sooner or later.

Categories
PHP Programming Wordpress Development

WordPress Development as a Team

At my day job I’m really the only person that knows how to write WordPress plugins, so when I write one it’s usually sand-boxed on my machine where nobody can touch it. However, in a side endeavor I’m part of, we have a team of 3 people developing one plugin. As I’m the most experienced plugin developer amongst our team, I was tasked with coming up with a development style and plugin architecture that would work for us.

Development Style

Everyone will be running a local copy of WordPress and making their changes to the plugin locally. The plugin itself will be under version control with Git, and developers will push/pull changes from either a self-hosted Git server or GitHub. The database schema will be tracked in a file called schema.sql. When someone makes a change to the schema, it goes at the bottom of that file with a comment about why the schema changed. We’ll be using jQuery as our JavaScript framework of choice, and we’ll be writing all of our JavaScript in CoffeeScript (see my previous entries).

Plugin Architecture

The more difficult aspect of developing this plugin as a team is the sheer size of the plugin. Realistically this could probably be split into about 6 different plugins by functionality, but we want to keep everything together in one tidy package. To illustrate the architecture, I made a quick drawing.

The first layer of the plugin is essentially a wrapper. It initializes the ORM that we are using to access the database (we are using a separate database for this plugin’s data) and includes the wrapper class. The wrapper class is where developers drop their sub-plugin include file and instantiate its main object. For instance, each sub-plugin will probably have two classes instantiated in the wrapper: one for admin-related functionality, and the other for front-end display functionality (see the sketch below). My thinking with this architecture was that we could all work on separate sub-plugins without crossing paths too frequently. This also allows us to separate the different functional areas of the plugin in a logical manner. The other benefit of architecting the plugin like this is that it will be very easy to port to a different architecture in the future. I’m well aware that WordPress probably isn’t the best tool for the job, but it is the best tool for the team with the deadline that we have.
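
To make that more concrete, here is a minimal sketch of how one sub-plugin might be wired into the wrapper; every class and file name here is a hypothetical placeholder, not our actual code:

//Hypothetical wrapper: one include plus two instantiations per sub-plugin.
require_once 'sub-plugins/orders/class-orders-admin.php';
require_once 'sub-plugins/orders/class-orders-frontend.php';

class My_Plugin_Wrapper {
     public $subPlugins = array();

     public function __construct() {
          //Each sub-plugin contributes an admin object and a
          //front-end display object.
          $this->subPlugins[] = new Orders_Admin();
          $this->subPlugins[] = new Orders_Frontend();
     }
}

new My_Plugin_Wrapper();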

Thoughts

While thinking about WordPress plugin architecture, I cruised the source code of a lot of plugins, and it seems that everyone goes about it in a different way. If you’ve ever developed a large-scale plugin with a team, how did you go about doing it? Did you run into any problems that you didn’t foresee at the beginning of the process?

Categories
PHP Programming

Getting Web Scale with Memcached

The web is huge, and there are a lot of people on it. Day and night, millions upon millions of people are on the web surfing, commenting, and contributing. Normally your blog gets a few hundred visitors a day (a few thousand on a good day), but what happens when that number increases? Can your database server handle all that load? Will Apache come screeching to a halt due to all of the requests? The answer is probably yes, unless you implement some form of caching. Many years ago this wasn’t a huge problem, but as the web and its user base have grown, so has the problem of “web scale”.

Memcached

Memcached is a pretty simple concept. Just as the name implies, it’s a caching system that stores stuff in memory. That’s really all you need to know to get started. If you’re interested in learning more, check out the Memcached home page.

PHP Memcache

As this is mostly a PHP blog, I’m going to show you how to use Memcached with the PHP Memcache module. While this tutorial is language specific, the concepts here can be applied to any language to increase the speed of your web pages. That being said, the first step is to get Memcached installed on your machine. There are a ton of tutorials out there on the web for this, so I’m going to leave that as an exercise for you. Once that’s installed, you should check out my guide for getting the PHP Memcache module installed on XAMPP, that way you can run this tutorial locally.

Step 1: Make the connection

This step is pretty straightforward. If you can’t connect to your caching server, you can’t cache. If the connection is successful, continue trying to cache. Otherwise, just query your database as normal.

$memcache = new Memcache;
$memcache->connect("my.memcached.server", 11211);
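
Since the workflow above says to fall back to the database when the cache is unreachable, here’s a minimal sketch of that check (my own addition; the server name is a placeholder, and the @ just suppresses PHP’s connection warning):

$memcache = new Memcache;
//connect() returns false on failure, so test it before caching.
$connected = @$memcache->connect("my.memcached.server", 11211);
if (!$connected) {
     //Cache server is down: skip caching and query MySQL as normal.
     $memcache = null;
}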

Step 2: Cache something

For this step, the only potential “gotcha” is that your identifier must be unique, and the time to expire is in seconds.

$myValue = "hello world!";
//The third argument is a flags field (false = no compression);
//the fourth is the expiration time in seconds (here, 24 hours).
$memcache->set("Hello World", $myValue, false, 60*60*24);

Step 3: Retrieve an item from the cache

$myValue = $memcache->get("Hello World");
echo $myValue;

Step 4: Putting it all together

So how does all this work in conjunction with your web app? The basic workflow for using caching is the following:

1. Does my item exist in the cache? (A failed connection to the cache falls into the “no” case as well.)
2. If so, get the item and store it in a variable.
3. If not, get the item from the database.
4. Store the item in the cache for later use.

$memcache = new Memcache;
$memcache->connect("my.memcached.server", 11211);

$arrayVals = $memcache->get("My Identifier");
if (!$arrayVals) {
	//Cache miss: pull the rows from MySQL instead.
	//Note: This assumes that the data in the table doesn't change
	//and that it is fairly small in size.
	$query = "SELECT * FROM myTable";
	$result = mysql_query($query);
	while ($row = mysql_fetch_array($result)) {
		$arrayVals[] = $row;
	}
	//Cache the result set for 24 hours.
	$memcache->set("My Identifier", $arrayVals, false, 60*60*24);
}

foreach ($arrayVals as $val) {
	print_r($val);
}

If you’re following carefully, you can see that the first time through, the data will get pulled out of the database. However, for the next 24 hours the data will be coming from the Memcached server. It’s little tricks like this that can help your site survive being featured on Reddit. Moral of the story: if your site is slow because of volume, try caching almost everything and you should see noticeable improvements.

Categories
Other

Switching from Google to Duck Duck Go

For the longest time I’ve been hearing the praises of a little search engine called Duck Duck Go amongst the Hacker News crowd. Yesterday, I finally decided to take the plunge and set it as Chrome’s default search engine. After a day of solid use, here are some of my observations:

• The search results are good: While Google has been taking time to improve their results lately, it’s refreshing to see original content get ranked higher than web scrapers. In fact, the web scrapers have a tendency to not show up at all on DDG.
• Lots of documented goodies: I’m still getting my feet wet with DDG, but the ridiculous number of goodies is going to make things a lot more enjoyable.
• I like not having page previews by default: I’m not sure if DDG even supports this, but I absolutely HATE having preview panes pop up in Google by default. Can it be turned off? Yes. Am I too lazy to do it? Yes.
• Directly search other sites: You can search other sites directly, which is a nice feature. Try “!amazon Founders at work”.
• They don’t track you: You know that feeling you get when you think someone is following you down a dark alley at night? That’s the feeling you should get using Google. They track everything. I don’t like being tracked, so DDG is probably going to become my permanent search engine.
• Instant Answers: Sometimes you don’t need to click through to a page. For instance, search “jquery” and you get a handy little box that tells you what jQuery is and where you can get more info about it. For someone new to jQuery, that little bit of information could help them make a more informed click to learn more.
• I miss instant search: I would really like to see an option for instant searching on DDG. Being able to refine your search results letter by letter was really handy.
• I miss other Google service integration: Searching for “coffee near 49503” would show a map with coffee houses on it in Google. In DDG, my results aren’t nearly as useful. I hope that some sort of map integration is in their future, because it would stop me from switching back to Google to use their map service.

What do you like about DDG? What features do you wish it had?