I’m a nerd. I regularly use Ubuntu, and just bought a Windows Phone 7 instead of an iPhone. With Halloween right around the corner, I thought it was time to bump up my nerd credentials.
I now present to you… the Dual-Boot Pumpkin!
So you’ve finally discovered the wonder that is the HTML5 Canvas element. Great! If you’re like me, the first thing you wanted to do with it was doodle on it. I eventually worked out how to map touch/mouse events to the canvas and draw lines, but then I wanted to save my creations!
As it turns out, the Canvas element has a method called toDataURL(), which returns the contents of the canvas as a base64-encoded data URL string. From there, you can just pump it over to a server and handle it however you like. Here’s the step-by-step, which assumes you are also running jQuery on your site.
var data = document.getElementById("myCanvasID").toDataURL();
$.post("process.php", { imageData: data }, function(data) {
    window.location = data;
});
$data = substr($_POST['imageData'], strpos($_POST['imageData'], ",") + 1);
$decodedData = base64_decode($data);
$fp = fopen("canvas.png", 'wb');
fwrite($fp, $decodedData);
fclose($fp);
echo "/canvas.png";
Note: The first line of this script strips off the header ("data:image/png;base64,") that is sent along with the encoded data.
That’s all there is to it. You can now easily save your HTML5 awesomeness.
Sometimes when you are making a web application you need to search some data. A lot of the time, it exists as an array in memory. I recently came across such a problem on a PhoneGap project I’m working on. The app has to work offline, so my searching needed to take place in JavaScript. Since we’re using jQuery with this app, I decided to play with jQuery’s $.map function, which takes your array and performs an operation over each value in it. This was super handy in my case because it let me search through my data quickly, without having to make an Ajax call out to my database to do the search with MySQL.
Example:
var searchTerm = $("#searchField").val().toUpperCase();
var results = $.map(self.defaultProductList, function(product, i) {
    if (product.name.toUpperCase().indexOf(searchTerm) !== -1) {
        return product;
    }
});
searchTerm is the value that I’m searching for. $.map takes an array as its first argument and a function as its second. I created an anonymous function that checks whether the search term appears in the current object’s name. If it does, I return the object so it gets added to the final array; $.map drops null and undefined return values, which is what filters out the non-matches. All in all, an excellent way of searching through data when you don’t have the luxury of a database to query.
I’ve been doing a fair amount of JavaScript programming lately, and I found myself needing to remove a nested object from an object. Doing this is easy enough with the “delete” operator, but when the container is an array it leaves annoying “undefined” holes all over. To get around that, I scoured the internet for a way to remove them easily. Turns out that if efficiency isn’t a problem, it’s easier to push the objects you want to keep into a new array and then re-assign it.
var tmpArray = new Array();
for (var el in self.orderData.data.items) {
    // Only keep the items that still have a value.
    if (self.orderData.data.items[el]) {
        tmpArray.push(self.orderData.data.items[el]);
    }
}
self.orderData.data.items = tmpArray;
Easy as pie.
If you’re a plugin or theme developer, there may come a time when you need to execute a long-running operation. It doesn’t need to be anything complicated; something as simple as fetching a Twitter feed can take a significant amount of time. When you come across these types of situations, it’s handy to be able to store the data on your own server and then fetch a new copy of it every X hours. This is called caching, and WordPress conveniently comes packaged with an excellent caching API called Transients.
The Transients API is surprisingly simple to use. In fact, it’s very much like using update_option and get_option, except with an expiration time. If you aren’t familiar with caching at all, here’s the general workflow: check for a cached copy of the data; if one exists, use it; if it doesn’t, generate the data, cache it for next time, and use that.
When attempting to use the Transients API for caching, there are three functions that you need to be aware of: set_transient, get_transient, and delete_transient.
Now that we’ve covered the basics, how about a quick example?
if (false === ($my_data = get_transient('super_expensive_operation_data'))) {
    $my_data = do_stuff();
    set_transient('super_expensive_operation_data', $my_data, 60*60*12);
}
echo $my_data;

function do_stuff() {
    $x = 0;
    for ($i = 0; $i != 999999999; $i++) {
        $x = $x * $i;
    }
    return $x;
}
The example is pretty straightforward. We first check to see if there is a cached copy of the data; if there isn’t, we fetch the data from the “do_stuff” function and store it in the database for twelve hours. Simple, right?
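The example never calls delete_transient, but it comes in handy when the underlying data changes before the expiration is up and you want to force a refresh on the next page load. Here’s a minimal sketch; the option name and the callback are hypothetical, not part of the example above.

function my_plugin_clear_cache() {
    // Throw the cached copy away; the next page load will rebuild it.
    delete_transient('super_expensive_operation_data');
}
// Hypothetical hook-up: clear the cache whenever our settings option changes.
add_action('update_option_my_plugin_settings', 'my_plugin_clear_cache');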
One of the benefits of using the Transients API (aside from speeding your site up) is that plugins like WP Super Cache or W3 Total Cache will auto-magically cache your data into memcached if you have it set up. For you, this means an even faster site! If you have any questions about caching techniques or the Transients API, leave a comment and I’d be happy to help.
You can download my code for this article here.
Some time ago I had great aspirations of launching a web company that does email tracking and analytics. One of the things that I really wanted to figure out, but that wasn’t well documented on the web, was how to track how long a user had a particular email open. When a company like MailChimp wants to track the emails it sends out, it puts a small image in the email called a “beacon”. When the user opens the email, the beacon image is requested from the server. The server sends the image, but not before it gathers information about the computer requesting it. This works great for checking whether an email was opened, or what platform the person is on, but it doesn’t work at all for determining how long the email was open.
One option that came to mind for checking the open time of an email was polling: using JavaScript to contact the server every X seconds after the email was loaded. Using those requests, it’d be trivial to find out how long it was open. Unfortunately, most (if not all) email clients don’t allow the execution of JavaScript within emails, so that idea was completely sunk. The only option I had left was to use the beacon image to somehow determine open time.
The only approach I could think of that used the image beacon without any JavaScript was to redirect the image request back to itself. After much trial and error, I came up with the following.
//Open the small GIF we are going to pretend to send to the user.
$fileName = "../img/beacon.gif";
$fp = fopen($fileName, "r");

//Send an image header so the mail client keeps the connection open.
header("Content-type: image/gif");

//Sleep for the timing, then redirect back to this same script.
//Every request that comes back means the email has been open
//for roughly two more seconds.
sleep(2);
if (isset($_GET['clientID'])) {
    $redirect = $_SERVER['REQUEST_URI'];
} else {
    //$clientID is assumed to be defined earlier in the script,
    //identifying which recipient requested the beacon.
    $redirect = $_SERVER['REQUEST_URI'] . "?clientID=" . $clientID;
}
header("Location: $redirect");
fclose($fp);
So what’s happening in this code? First of all, we open a small GIF file that we pretend we’re going to send to the user. The second step is to send an image header to the user so that their mail client expects an image to be delivered. This step is important because if the header isn’t sent, the browser/mail client will close the connection. After that, you make the request sleep for a few seconds (as few or as many as you want, depending on how granular you want your timing data to be) and then redirect back to the same script; each time the redirect comes back, you know the email has been open for another interval. The “if” statement before the redirect is there so you can identify incoming requests and log the data accordingly.
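The post doesn’t show the logging side, but it follows directly from the redirects: every request that comes back represents one more sleep() interval that the email was open. Here’s a rough sketch of what that logging could look like; the PDO connection details and the beacon_hits table are my own assumptions, not code from this article.

//Hypothetical logging: count beacon hits per clientID. Open time is
//then roughly hits * 2 seconds (the sleep interval used above).
//Assumes beacon_hits has a UNIQUE index on client_id.
$db = new PDO('mysql:host=localhost;dbname=tracking', 'user', 'pass');
$stmt = $db->prepare(
    'INSERT INTO beacon_hits (client_id, hits) VALUES (?, 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1'
);
$stmt->execute(array($_GET['clientID']));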
So there you have it. If you’ve ever wondered how people track the open time of an email, it’s probably a method very similar to this. The only caveat to this method is that it relies on the user loading images in an email. However, if you have a large enough sample you can just take the average open time from the users that did open it and be fairly confident with that.
Note: There has been some discussion over on Stack Overflow about this article. You may find it helpful.
It has all happened before, and will happen again…
Back in the late ’90s, we experienced an economic bubble of immense proportions. The internet (read: the World Wide Web) was just starting to gain mainstream acceptance, which is when the gold rush began. Companies with no real business plan and no way of making a profit were securing millions of dollars in funding. Beyond funding, some of these companies were getting bought for BILLIONS of dollars. For instance, The Learning Company was purchased by Mattel for over $3 billion in 1999, but was sold for only $27 million in 2000. While the company clearly had some value, it was overvalued beyond any reasonable price. This is the epitome of the “Dot-Com” bubble.
Over the past few months, there has been a lot of discussion on Hacker News about the possibility of another “Dot-Com” bubble happening right now. A lot of people think that we are winding up for another bubble, but there is also a fairly large group who think that this time is different. I fall into the latter group, and here’s why.
Starting with Y Combinator, a new philosophy on web startups emerged: lean startups. In a nutshell, your startup is given a small amount of money (enough to live frugally on for a few months) and mentorship. The most important part of programs like Y Combinator is the mentorship. You get access to seasoned investors, business people, and founders who help you realize your idea’s potential. The upside to bringing a company to fruition this way is that your startup costs are low, and you will know very quickly whether you can become profitable. During the first bubble, anybody with an idea and a web page could get millions in funding. No market validation required, just an idea. This time around, you actually need to have a plan. You need to have traction. You need to be profitable. Sure, some companies are getting overvalued (*cough* Facebook *cough*), but that happens whether we’re in a bubble or not.
The important thing to take away from this is to look at which companies are getting serious funding (>$500k) and which companies are making nice (fat) exits. Are they good companies? Would you use their product? Would someone you know use their product? Are they profitable? Do they have a user base? If you can answer “yes” to most of these questions, we probably aren’t in a bubble. We’re in something else. A new economy? An information economy? Well, we already have an information economy, so what now? We’re transforming the way we do business and interact with each other. Instead of hosting things yourself, why not let somebody else do it for you (Heroku)? Keeping in contact with people is hard, so why not let Facebook do it for you?
I’m not sure where all this is leading, but I’m fairly positive it’s not a bubble. It’s something different. It’s a transformation of our economy. To what, I don’t know. But it is changing, and it’s going to touch every single one of our lives sooner or later.
At my day job I’m really the only person who knows how to write WordPress plugins, so when I write one it’s usually sandboxed on my machine where nobody can touch it. However, in a side endeavor I’m part of, we have a team of three people developing one plugin. As the most experienced plugin developer on the team, I was tasked with coming up with a development style and plugin architecture that would work for us.
Everyone will be running a local copy of WordPress and making their changes to the plugin locally. The plugin itself will be under version control with Git, and developers will push/pull changes from either a self-hosted Git server or GitHub. The database schema will be tracked in a file called schema.sql: when someone changes the schema, the change goes at the bottom of that file with a comment about why it was made. We’ll be using jQuery as our JavaScript framework of choice, and we’ll be writing all of our JavaScript in CoffeeScript (see my previous entries).
The more difficult aspect of developing this plugin as a team is the sheer size of the plugin. Realistically this could probably be split into about 6 different plugins by functionality, but we want to keep everything together in one tidy package. To illustrate the architecture, I made a quick drawing.
The first layer of the plugin is essentially a wrapper. It initializes the ORM that we are using to access the database (we are using a separate database for this plugin’s data) and includes the wrapper class. The wrapper class is where developers drop their sub-plugin include file and instantiate its main object. For instance, each sub-plugin will probably have two classes instantiated in the wrapper: one for admin-related functionality, and the other for front-end display functionality. My thinking with this architecture was that we could all work on separate sub-plugins without crossing paths too frequently. It also lets us separate the different functional areas of the plugin in a logical manner. The other benefit of architecting the plugin like this is that it will be very easy to port to a different architecture in the future. I’m well aware that WordPress probably isn’t the best tool for the job, but it is the best tool for the team with the deadline that we have.
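To make that concrete, here’s a rough sketch of the wrapper layer. Every file and class name here is hypothetical; the point is just the shape: one include and two instantiations (admin and front-end) per sub-plugin.

//Hypothetical wrapper class. Each sub-plugin ships one include file
//that defines an Admin class and a Frontend class.
class Plugin_Wrapper {
    public $sub_plugins = array();

    public function __construct() {
        //Each sub-plugin drops its include file in sub-plugins/...
        require_once dirname(__FILE__) . '/sub-plugins/orders.php';

        //...and gets two objects: one for wp-admin functionality,
        //one for front-end display.
        $this->sub_plugins[] = new Orders_Admin();
        $this->sub_plugins[] = new Orders_Frontend();
    }
}
new Plugin_Wrapper();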
While thinking about WordPress Plugin Architecture, I cruised the source code of a lot of plugins and it seems that everyone goes about it in a different way. If you’ve ever developed a large-scale plugin with a team, how did you go about doing it? Did you run in to any problems that you didn’t foresee at the beginning of the process?