Distributed VCSs are the Great Enablers (or: don't fear the repo)

Posted by Unknown, Sunday, 02 December 2007

The more I play with the new breed of VCS tools, the more I appreciate them. The older generations (CVS, SVN) look increasingly archaic, supporting a computing and development model that seems unsustainable. Yet most of us lived with those tools, or something similar, for most of our development-focused lives.



When I speak of the new breed, the two standouts (to me) are Git and Mercurial. There are some other interesting ones, particularly Darcs, but Git and Mercurial seem to have the most steam and seem fairly grounded and stable. Between those two, I still find myself preferring Git. I’ve had some nasty webs to untangle and Git has provided me with the best resources to untangle them.



Those webs are actually all related to CVS and some messed-up trunks and branches. Some of the code lives on in CVS, but thanks to Git, sorting out the mess and/or bringing in a huge amount of new work (done outside of version control, because no one likes branching in CVS and everyone is afraid of ‘breaking the build’) was far less traumatic than usual.



One of those messes could have been avoided had we been using Git as a company (which is planned). One of the great things these tools provide is the ability to easily do speculative development. Branching and merging are so easy. And most of those branches are private.

One big problem we have with CVS is what to name a branch: how to make the name unique, informative, and communicative to others. And then we have to tag its beginnings, its breaking-off points, its merge points, etc., just in case something goes wrong (or even right, in the case of multiple merges). All of those tags end up in one big cloud: long, stuffy, confusing names that outlive their usefulness. It’s one thing to deal with all of this for an important branch that everyone agrees is important. It’s another to go through it just for a couple of days or weeks of personal work. So no one does it. And big chunks of work are just done dangerously - nothing checked in for days at a time.

And what if that big chunk of work turned out to be a failed experiment? Maybe there are a couple of good ideas in that work, and it might be worth referring to later, so maybe now one makes a branch and does a single gigantic check-in, just so that there’s a record somewhere. But now one can’t easily untangle the couple of good ideas from the majority of failed-experiment code. “Oh!” they’ll say in the future, “I had that problem solved! It’s just all tangled up in the soft-link-experimental-branch in one big check-in and I didn’t have the time to sort it out!”



I speak from personal experience on that last one. I’m still kicking myself over that scenario. The whole problem turned out to be bigger than expected, and now there’s just a big blob of crap, sitting in the CVS repository somewhere.



With a distributed VCS, I could have branched the moment it looked like the problem was getting bigger than expected. Then I could keep committing in small chunks to my personal branch until I realized the experiment had failed. With smaller check-ins, navigating the history to cherry-pick out the couple of good, usable ideas would have been much easier, even if everything else was discarded. I wouldn’t have to worry about ‘breaking the build’, or about finding a good name for my branch, since no one else would ever see it. I could manage it all myself.



This is the speculative development benefit that alone makes these tools great. It’s so easy to branch, MERGE, rebase, etc. And it can all be done without impacting anyone else.
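That speculative workflow can be sketched in a handful of commands. This is a hypothetical illustration (repository, file names, and commit messages are all made up): a private branch holds the experiment, and when the experiment fails, the one good commit is salvaged with a cherry-pick.

```shell
# Hypothetical speculative-development sketch: everything happens in a
# throwaway local repository; no central server is ever involved.
set -e
work=$(mktemp -d) && cd "$work"
git init -q
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

echo 'baseline' > app.txt
git add app.txt && git commit -qm 'baseline'

git checkout -qb experiment             # a private branch; nobody else ever sees it
echo 'good idea' > idea.txt
git add idea.txt && git commit -qm 'one good idea'
echo 'dead end' > junk.txt
git add junk.txt && git commit -qm 'failed direction'

# The experiment failed overall, but the good commit is easy to pick out.
git checkout -q "$main"
git cherry-pick -x experiment~1         # salvage just the good idea
```

With small, frequent commits, the salvageable work is a single cherry-pick away instead of being buried in one gigantic check-in.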



One thing that I often hear when I start advocating distributed VCSs is “well, I like having a central repository that I can always get to” or “that is always backed up” or “that is the known master copy.” There’s nothing inherent in distributed VCSs that prevents you from having that. You can totally have a model similar to SVN/CVS in regards to a central repository, with a mixture of read-only and read/write access. But unlike CVS (or SVN), what you publish out of that repository is basically the same thing that you have in a local clone. No repository is intrinsically more special than any other; policy makes it so. You can say “all of our company’s main code is on server X under path /pub/scm/…”.
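For those who want that central-repository feel, here is a minimal sketch (all paths hypothetical) of what it looks like with Git: the “central” repo is just an ordinary bare repository that policy, not the tool, makes special.

```shell
# Hypothetical "central by policy" sketch: a bare repository at an
# agreed-upon path that everyone clones from and pushes to.
set -e
srv=$(mktemp -d)                          # stand-in for "server X" under /pub/scm/...
git init -q --bare "$srv/project.git"     # the agreed-upon central repository

git clone -q "$srv/project.git" work      # a full local copy, history and all
cd work
git config user.email dev@example.com && git config user.name Dev
echo 'hello' > README
git add README && git commit -qm 'first commit'
git push -q origin HEAD                   # publish back to the central place
```

The clone is not a second-class checkout the way a CVS working copy is; it is the same kind of repository as the one on the server.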



And unlike CVS (or SVN), really wild development can be done totally away from that central collection. A small team can share repositories amongst themselves, and then one person can push the changes in to the central place. Or the team may publish their repository at a new location for someone else to review and integrate. Since they all stem from the same source, comparisons and merges should all still work, even though the repositories are separate.



Imagine this in a company that has hired a new developer. Perhaps during their first three months (a typical probationary period), they do not get write access to the core repositories. With a distributed VCS, they can clone the project(s) on which they’re assigned, do their work, and then publish their results by telling their supervisor “hey, look at my changes, you can read them here …”, where here may be an HTTP URL or just a file system path. The supervisor can then conduct code reviews on the new guy’s work and make suggestions or push in changes of his own. When the new developer’s code is approved, the supervisor or some other senior developer is responsible for doing the merge. It’s all still tracked, all under version control, but the source is protected from any new-guy mistakes, and the new guy doesn’t have to feel pressure about committing changes to a large code base which he doesn’t yet fully grasp.



But perhaps the most killer feature of these tools is how easy it is to put anything under revision management. I sometimes have scripts that I start writing to do a small job, typically some kind of data transformation. Sometimes those scripts get changed a lot over the course of some small project, which is typically OK: they’re only going to be used once, right?



This past week, I found myself having to track down one such set of scripts again because some files had gotten overwritten with new files based on WAY old formats of the data. Basically, I needed to find my old transformations and run them again. Fortunately, I still had the scripts. But they didn’t work 100%, and as I looked at the code I remembered one small difference that 5% of the old old files had. Well, I didn’t remember the difference itself, I just remembered that they had a minor difference and that I had adjusted the script appropriately to finish up that final small set of files. But now I didn’t have the script that worked against the other 95%. When I did the work initially, it was done in such a hurry that I was probably just using my editor’s UNDO/REDO buffer to move between the variations as needed.



Now if I had just gone into the directory with the scripts and done a git init; git add .; git commit sequence, I would probably have the minor differences right there. But I didn’t know such tools were available at the time. So now I had to rewrite things. This time, I put the scripts and data files under Git’s control so that I had easy reference to the before and after stages of the data files, just in case this scenario ever happened again.
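That three-command sequence really is the whole ceremony. A hypothetical sketch (the script names and contents are made up) of how it pays off later:

```shell
# Put a throwaway script under revision control and recover an old
# version of it later. No server, no naming decisions up front.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email me@example.com && git config user.name Me

echo 'transform the 95% case' > transform.sh
git add . && git commit -qm 'first working version'

echo 'transform the odd 5% case' > transform.sh
git commit -qam 'adjust for the odd file format'

# Months later: the version that handled the other 95% is one command away.
git show HEAD~1:transform.sh
```

Both variants of the script survive, instead of one clobbering the other in an editor's undo buffer.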



I didn’t have to think of a good place to put these things in our CVS repo. I just made the repository for myself and worried about where to put it for future access later. With CVS/SVN, you have to think about this up front. And when it’s just a personal little project or a personal couple of scripts, it hardly seems worth it, even if you may want some kind of history.



Actually, that is the killer feature! By making everything local, you can just do it: make a repository, make a branch, make a radical change, take a chance! If it’s worth sharing, you can think about how to do that when the time is right. With the forced-central/always-on repository structure of CVS and SVN, you have to think about those things ahead of time: where to import this code, what should I name this branch so it doesn’t interfere with others, how can I save this very experimental work safely so I can come back to it later without impacting others, is this work big enough to merit the headaches of maintaining a branch, can I commit this change and not break the build….?



As such, those systems punish speculation. I notice this behavior in myself and in my colleagues: it’s preferred to just work for two weeks on something critical with no backup solution, no ability to share, no ability to backtrack, etc., than it is to deal with CVS. I once lost three days’ worth of work due to working like this - and it was on a project that no one else was working on or depending on! I was just doing a lot of work simultaneously and never felt comfortable committing it to CVS. And then one day, I accidentally wiped out a parent directory and lost everything.



Now, with a distributed VCS, I could have been committing and committing and still lost everything anyway, since the local repository lives right there in the working directory. But I could have made my own “central” repository on my development machine or on the network, to which I could push from time to time. I would have lost a lot less.



There are so many good reasons to try one of these new tools out. But I think the most important one comes down to this: just get it out of your head. Just commit the changes. Just start a local repository. Don’t create undue stress and open loops in your head about what, where, or when to import or commit something. Don’t start making copies of ‘index.html’ as ‘index1.html’, ‘index2.html’, ‘index1-older.html’, ‘old/index.html’, ‘older/index.html’ and hope that you’ll remember their relationships to each other in the future. Just do your work, commit the changes, get that stress out of your head. Share the changes when you’re ready.



It’s a much better way of working, even if it’s only for yourself.



Broken Bulb

Posted by Unknown, Tuesday, 20 November 2007
With apologies to Johnny Cash: "Flash, I hate every inch of you."


Falling for Git

Posted by Unknown, Wednesday, 07 November 2007

You know what? Git must have come a long way in the last year. I keep reading that Git is hard to learn, has rough documentation, etc. But it’s really been quite nice in comparison to many things.



It’s especially nice once you quickly learn that the HTML man pages for Git follow a simple pattern (as I guess many online man page collections must). Just change the end of the URL from git-cvsimport.html to git-push.html or git-pull.html to look up documentation.



That I’ve been able to play around with Git quite successfully and easily just makes my frustration with some Python tools (like easy_install and zc.buildout, particularly its recipes) even more …. frustrating.



And, I’ve totally fallen in love with Git. Yes, I know there are alternatives written in Python that are quite comparable. But Git’s actually been easier to install and figure out (particularly the CVS interaction that I must currently suffer). And people who know me know that I’m no “kernel monkey.” I’m really impressed with Git’s implementation and general behavior.



By the way: if you’re having to work two ways with a CVS repository, this post has been absolutely invaluable. This collection of Git articles has been invaluable in getting some good defaults established, and in offering tips for building on Mac OS X (with a nice tip to download and untar the man pages directly instead of trying to build them with the asciidoc tool and its terrible dependency on troublesome XML libraries - goddamn, how I hate XML).



AppleScripting Across the Universe

Posted by Unknown, Thursday, 01 November 2007

After a long day at work, I wrote a long message in Basecamp about what I had accomplished, how to access it, etc. But I forgot to submit the message! Crud. I wanted to send it out before morning and didn’t want to go into the office. I couldn’t get any screen sharing connection to go between the machines. I just had a handful of SSH leaps.



AppleScript to the rescue!



This is probably the most AppleScript that I’ve ever written. Fortunately, Safari supports the command do JavaScript ... in tab. After some floundering around with a similar setup on my local machine, I finally figured out AppleScript’s interesting reference notation and was able to ferret out the window and tab containing the unsent message, add some text to the message’s textarea element, submit the form, and return the extended value.



tell application "Safari"
    set message_tab to current tab of window named "Web site > New message"

    set extended to ".... Fun fact - i wrote this before i left the office and forgot to submit it. as a result, i now know how to submit forms like this via AppleScript."
    set post_body_value to "$('post_body').value"
    set extend_value to post_body_value & " += '" & extended & "';"

    do JavaScript extend_value in message_tab
    set body_value to do JavaScript post_body_value in message_tab

    do JavaScript "document.forms[0].submit();" in message_tab

    return body_value
end tell


I pasted the above code into VIM and ran it with the command-line osascript tool. Worked like a champ.



And because sleep is for the weak, I decided to track down how to do the equivalent in Python. Mac OS X 10.5 provides a “Scripting Bridge” for Python and Ruby (and potentially others), which causes many frameworks and other objects to be dynamically exposed, without the need (for better or worse) for yet another virtual machine. Anyways, I cobbled the following together:



from Foundation import *
from ScriptingBridge import *

safari = SBApplication.applicationWithBundleIdentifier_('com.apple.Safari')

def find_window_named(name):
    for win in safari.windows():
        if win.name() == name:
            return win

window = find_window_named("Web site > New message")
message_tab = window.currentTab()

print safari.doJavaScript_in_("$('post_body').value", message_tab)
safari.doJavaScript_in_("document.forms[0].submit()", message_tab)


There may be a better way to write the find_window_named function, but I didn’t have the time to track it down. As it was, I was able to do the above by playing around with everybody’s favorite Python tool, dir(), which verified my suspicion that many of the commands exposed to AppleScript were also available via the Scripting Bridge. This is evidenced by the currentTab() method of a Safari window, analogous to the current tab of window ... AppleScript expression. And I imagine most of these are just Objective C methods. And since AppleScript editor’s Dictionary browser told me about the do JavaScript [v] in tab [t] command, it stood to reason that it would exist on the Safari object. It was there when I did pprint(dir(safari)), and I knew that I’d need to pass in a Tab object.



In any case, it’s awesome that Apple has embraced Python and Ruby and has tied them in to the Cocoa runtime. Historical note: the first Python - Objective C bindings that I know of were commissioned by a NeXT developer who wanted to use Python and Bobo (zope.publisher) to do web work with NeXT’s Enterprise Objects Framework, without the weight and cost of WebObjects. I think that means that Python was bridged into the Objective C runtime and NeXTStep frameworks before Jython ever got going. I believe that work was done by the developer who later released Objective Everything, which bridged into Perl and Tcl as well as Python.



Of course, traditional MacPython from the classic Mac OS was also natively tied in to the AppleScript of that era; AppleScript has always supported other dialects (FrontierScript was a common one).



But it’s nice now to see support coming out of both Apple and Microsoft (and Sun too, I guess) for these languages. The above scripting of Safari was surprisingly easy. As was an earlier experiment to fish around my calendar store for incomplete To-Do items. Quite nice.



But what’s especially nice is that I was able to SSH into my office Mac and tell Safari to submit that form that I had neglected earlier.



Java Crybabies

Posted by Unknown, Monday, 29 October 2007

So Mac OS X 10.5 (Leopard) doesn’t ship with Java 6. And now Java people are all sad and mad and yelling at Apple for dropping the ball on this.



Why should Apple go out of their way to provide Java 6? After the aborted Java - Objective C bridge experiments, what else is there to do? “Native” Java applications have still never come close to feeling like a native or near native piece of the operating system. Why should Apple keep throwing engineering efforts at this system?



And whatever happened to OpenStep for Solaris by the way?



Honestly, Apple already has a dynamic object runtime environment heavily tied to a C-based language. It’s done much of what Java and the .NET framework are now doing, and has been doing it since the latter half of the eighties. It’s interesting to look back to the mid nineties and at the criticisms of NeXTStep/OpenStep: “Why are you using this Objective C thing? Why not C++?”



Because what NeXT understood that others didn’t was that the runtime is what’s important. NeXTStep came closest to providing a Smalltalk-style runtime of dynamic collaborating objects without being the alien and self contained environment that Smalltalk can often be. Initially tied in to a Unix operating system, it later went multi-platform (at least to an extent - I think FoundationKit, Enterprise Objects Framework, WebObjects, and PDO (Portable Distributed Objects) ran on HP-UX and Solaris, while all of those plus AppKit ran on NT, along with D’OLE (distributing COM/OLE while Microsoft was still struggling to provide DCOM)).



In those same mid nineties, there were other attempts to provide some of the power of the NeXTStep / OpenStep platform. IBM perhaps came closest with their CORBA (CORBA 1.x) based SOM, which was also to be at the heart of OpenDoc. SOM was the heart of the fascinating OS/2 2.0 and Warp. It also was used in the classic Mac OS. Of course, it was there in OpenDoc’s brief life. But beyond that, SOM was used to provide contextual menu plug-ins, interestingly enough. But it was still fairly heavy, as CORBA could be. Too much wringing and wrangling to help non-dynamic languages function in a semi-dynamic world.



And then there was Microsoft’s Cairo. Never shipped. Some of its technologies found their way into NT… But the big features? One of the big features was to be an “object oriented file system”. This resurfaced as WinFS for Vista. Twelve-plus years later, and it’s still not done.



And of course, there was Taligent. Initially, Taligent was going to be an all-new Operating System, aggressively object oriented, etc. Apple and IBM together, to make a NeXTStep for the rest of us, perhaps? Except instead of a dynamic language, they decided to go for C++. But they apparently had to pump in a lot of work to overhaul the C++ runtime and try to provide some of the dynamic loading options (of Smalltalk, NeXTStep/Objective C, etc). It was a lot of time wasted, I’m sure. And they eventually had to pull back from the all-new operating system plan. That was probably wise, considering the environment of the time. Apple never was able to complete Copland and Gershwin, and Microsoft never got Cairo finished; BeOS never found a substantial market; and even early NeXT-era Apple wasn’t able to sell the idea of a NeXTStep based Mac OS until they provided the Carbon migration path for the classic Mac APIs.



So Taligent shifted to providing, like OpenStep, differing layers that would provide these object features on top of differing host operating systems. Still never happened. Which is a bit of a bummer - they had some paradigms that would have been interesting to see.



In 1994, Jon Udell wrote a short article titled “A Taligent Update,” subtitled Will systemwide object frameworks reinvent programming?



Well, while Taligent never delivered, and OpenStep faded into WebObjects (providing the OpenStep developer tools on NT, NeXTStep Mach, and Mac OS X Server 1.x, aka Rhapsody), this seems to have actually, finally, come to pass. Cocoa is a killer framework for Mac OS X, with many fans. There are bridges to many other languages (Ruby, Perl, Python, among others). It’s not quite the pervasive system-wide framework that it was in the NeXTStep days, but in Mac OS X Leopard, Cocoa looks as though it’s reclaiming its position as king of the hill. (For a while, there were many Carbon APIs that were a bitch to use from Cocoa - or at least for those used to the comparative ease of Cocoa programming.)



And Microsoft’s .NET framework has delivered similar in the Windows market. Of course, it doesn’t have anything like Interface Builder; but it still seems to have a much better share-and-reuse model than anything that’s come before it in Windows programming. And it’s built on: dynamic object systems. Unlike Java (which I’ll get to in a minute), the .NET framework and core languages (C#) appear to be taking cues from Objective C and purely dynamic languages/systems like Smalltalk: dynamic class extension, for example, is a new feature in C# 3.0. It’s also been possible in Objective C. This can be a dangerous feature; but also quite useful and usable. But it’s nice to see this in languages and systems that try to combine C, which has the benefits of familiarity, with the power of dynamic object-filled worlds.



And it’s much better than the heavy and strained world of COM and CORBA.



So anyways - NeXT, and now Apple, has been ahead of this game for quite some time. Granted, if it weren’t for Apple, NeXT would be another blip like Taligent. Except with a shipping product. But still - they survived. And their system wide dynamic object framework idea seems to have been vindicated.



So what of Java?



Java is the bastard child here. I’ve never been comfortable with it. It’s not cross platform: Java IS the platform. And it’s awkward. Even in its best desktop guise - Eclipse - it’s still a foreign environment on Mac OS X. Even Firefox is starting to feel more natural (and Firefox 3 looks to be trying even harder in this area). Why would we want it? Sun has never seemed to care that much about Java on Macs, except to try to showcase their “see, multi-platform!” message. But Windows has always gotten the lion’s share of the attention, even though Microsoft long ago stopped caring about Java.



On a side note, we’re seeing the same thing happen now with Flash. Ugh. It’s a memory hog on Mac OS X and makes poor use of resources. If Adobe and Sun don’t seem to care enough about providing a truly killer Mac experience, is it any wonder that they’re being kicked off of the island? Apple’s got the dynamic object system language and frameworks (Objective C, Cocoa); it’s got an increasingly impressive web environment (WebKit, with Canvas support); and it has bridges into and out of AppleScript, Python, Ruby, and Perl, all included with Leopard.



One just has to look at how huge the “Java in a Nutshell” books have become to know that Java is no small undertaking. And again, I think that Apple has probably stopped caring. They’ve tried to be good Java citizens - from the Java - Objective C bridge to the all-Java implementation of WebObjects; they’ve tried to make the Cocoa framework appealing to Java developers and probably tried to make Java appealing to Cocoa developers. But it must not have ever happened. It’s all deprecated now.



And I don’t know what’s going on, really. I just see Apple as having other priorities. It’s not like they’re (purely) a not-invented-here company. They have, after all, built in Sun’s DTrace technology. And Apple builds on and gives back to Open Source, with projects like launchd, the new calendar server, bridge support, WebKit, etc.



But Java…? Apple has no real stake in it any more. The last Java application I ran was Eclipse, months ago, just trying to see what life in a fancy IDE would be like. It was disappointing. Desktop Java just doesn’t figure into a Mac user’s life all that much. I’d rather see Apple focus on improving their primary object language (which inspired Java), focus on improving their APIs and offering more features for programmers (all done handily in Mac OS X 10.5 - from a programmer’s standpoint, it’s extremely impressive) and providing a smooth, fast, and natural platform experience (again done handily in Mac OS X 10.5 - see the still-unequaled Interface Builder 3.0; see CoreAnimation; see a sea of new UI object offerings from Apple).



Why would they spend their time fighting uphill to support a platform whose chief aim is to be an anti-platform? At best, desktop Java on the Mac could mean “runs a Windows-like application almost as well as Windows, maybe.” Well, we now have Boot Camp and Parallels/VMware for that. Apple wants to provide a killer alternative platform, and they’ve learned that the best way to do that is to be in control.

When Mac browsers were suffering - IE on Mac OS X was slow and strange (compared to near-excellent behavior in OS 9); Camino and Firefox were big, slow, and non-native (Camino did a decent job, but it had different widget implementations for the browser skin and in-page rendering); OmniWeb looked beautiful but its support for new HTML and JavaScript was far behind; etc. - Apple took control of their destiny by building Safari. And they built it on a toolkit that would let them plug in the right widgets and behavior for a native experience. Hence, all Safari users have long enjoyed having spell-checking support in their textareas. We get that from Cocoa. As of Mac OS X 10.5, we can turn on grammar checking as well. And now WebKit is leading much of the HTML 5 charge, recently announcing preliminary support for client-side database storage, and they were among the first (if not THE first) to put forward tags like canvas, which may make SVG usable and may ultimately take care of many uses of Flash (remember when Flash was purely an animation tool?). Apple has all of this in their control. They don’t have to put up with lackluster players / viewers from Adobe.



And I think it’s become pretty clear that Apple’s preferred solution at this time for rich cross-platform(ish) UI and code is - the web. HTML, CSS, JavaScript. It powers Dashboard, and it’s been a big selling point of the iPhone (it was admittedly laughable when Apple said “Web 2.0 apps are your iPhone API!”, but it’s still impressive; people have built some very impressive apps with that very system).



Why does AJAX / DHTML succeed where applets have failed, and where even Flash does poorly? It’s not just because it’s “everywhere,” without a need to install and deal with a JRE. It’s because AJAX is part of the web page. It’s not a self-contained rectangle that can do really cool things - within the realm of that rectangle. It’s because AJAX leverages the browser’s toolkit, so that a text field in IE behaves like a text field in IE, and a text field on a Mac behaves like a text field on a Mac. Java performance has never been that great on the Mac, which must mean either that no one cared enough to really try to make it shine, or that the technology is really heavy and inferior. I remember groaning every time I used to check the snow report page at one of the ski resorts because of the extra time taken to load up Java and all that accompanies it just to render some scrolling headlines - scrolling headlines whose text didn’t match (or even anti-alias with) the surrounding text. It’s absolutely useless and pointless. Granted, I have seen some impressive Java applets in the science arenas, and IBM had some cool chess ones in their old Deep Blue v Kasparov (?) challenge. But again, those have been few and far between.



The two desktop Java apps that I’ve used heavily at points in the past, besides my experiments with Eclipse, were a UML tool and LimeWire, both used on Macs. The UML tool was tolerable, but barely. LimeWire was terribly slow - click and wait instead of click and point. Desktop Java is dead to me. I don’t know why you’d want to write in it. Cocoa is just an all-out, balls-out better environment, especially for UI programming, as (again) Interface Builder remains peerless. And .NET, especially with the Mono implementation, is a far more interesting playground that seems keen on taking in new dynamic, declarative, and functional features (LINQ, F#, etc.), while Java just feels like a big stack of alphabet soup and static typing and not much else. (Although I do understand that Java 6 has started to break this mold.)



And as far as languages go: please. Python, Perl, and Ruby alone offer better cross platform capabilities than Java: especially on the server side where Java is supposed to shine. I didn’t have to wait for Apple’s blessing to use Python 2.5 on my desktop Mac. I didn’t have to wait for Apple’s blessing to draw native widgets with it either. I didn’t have to wait for Apple’s blessing to use Python 1.x to control other applications across the scripting bridges of Mac OS 7, 8, and 9.



Now that Java is Open Source (it is, isn’t it?), maybe the Java community can look into what it would take to provide a good Java experience on Mac OS X. I think that is its only hope. It has to become leaner. The dynamic “scripting” language crowd have all been able to find ways to take advantage of different platforms. Why is it on Apple’s head to provide Mac Java? Going back to what I asked earlier - is Sun going to try to get OpenStep going again on Solaris? Is Microsoft going to provide Java 6 for Windows? Is Apple going to provide .NET 3.5 for the Mac?



A response I’ve seen in the Java community is that Apple is arrogant for not shipping Java 6 with Leopard, and for withdrawing any development downloads and many topics related to Java on the Mac. I think that it’s arrogant of the Java community to think that they matter enough for Apple to continue to sink engineering resources into the platform. They’ve sunk a lot in over the years, and there’s never been a huge payoff. I see no reason for them to continue. They have far better alternatives.



IBM or Sun or anyone else out in the Java / Open Source community should take it upon themselves to provide a good platform if they really care. They’re obviously doing it for Linux and Windows.



Sorry, this is a long and rambling post and now it’s quite late at night when I swore I would be going to bed early. But seriously - Java has not mattered to me as a developer or Mac user for years. It’s a dead weight for Apple. Support for Mac OS “Classic” got the boot in Leopard, and even Carbon is looking like its days are numbered. If those two can be cut off, what the hell chance would Java have?



Catching Up

Posted by Unknown, Friday, 12 October 2007

These periods between posts keep getting longer, don’t they?



I’ve got nothing earth-shattering to talk about. Work’s been very busy, and we continue to be served well by Zope 3. I’m still royally confused by things like setuptools and eggs, mostly in regards to how they work in a Zope 3 world when you’ve already got long entrenched ways of doing software. I could not get a good answer from anyone I asked (in fact, I often got wildly competing opinions). So I’m sticking with our internal make-rake-like-ish toolkit which is primarily helpful for automating checkouts from internal and external repositories. I did have some success with zc.buildout, but I don’t yet foresee a time when I can use it to deploy whole sites/applications. I can barely see a time when I can use it on anything but small projects that are relatively stand-alone. There’s just a big gap between The Way Things Have Been Done and The Way That It Seems That Maybe Things Should Be Done In The Future.



Of course, neither setuptools nor zc.buildout seem to have “proper” releases. zc.buildout is in an endless 1.0 beta (beta-30 at this point), and setuptools is at 0.6c7. Does that mean that it’s not even at release 0.6 quality yet? None of this instills confidence in this hurried developer.



The big problem is the legacy code, which is in CVS. Some of it is being extracted out into individual packages that have the proper ‘setup.py’, ‘buildout.cfg’, etc. Finally. But I have no idea how to apply it to the bigger picture, and I’ve found very little written that targets our situation.



The biggest downside of being so busy with customer related work is that it’s very difficult to keep up with discussions, conversations, plans, etc. And I’m sure that my frustrations with lack of documentation, seemingly unfinished releases, and so on, are really the fruit of other hurried developers. I admire them for at least releasing something. It’s more than I’ve done in a long time. It’s more than I see myself being able to do for quite some time.



Anyways, the revolving door of Javascript toolkits keeps turning. I’m now deeply enamored with jQuery. “Write less, do more”. I like it. I like that it doesn’t trample all over Javascript, and thus plays well with others (especially others that play well with others, like MochiKit). MochiKit is just so big… I think I might take a stab at writing, at least for internal use, a lightweight version that brings many of its best concepts out without overlapping jQuery’s functionality. MochiKit brings many wonderful Python-ic functions and tools to the Javascript table that make general development much easier.



I’m also deeply enamored with zc.resourcelibrary which is a Zope 3 add-on that makes it much easier to manage javascript and CSS resources and their relations to each other. Among other things, it helps save resources when they’re not needed. For example:



if len(rendered_boxes) <= 3:
    return self.just_render_the_damn_boxes(rendered_boxes)
else:
    zc.resourcelibrary.need('fancy.scrolling.library')
    return self.render_the_advanced_widget(rendered_boxes)


I’ve also adjusted my coding style, returning to the underscore_separated_words style instead of the camelCasedWords style, at least for functions, attributes, and methods. This is closer in style to PEP 8 (the main style guide for Python code). The Zope style guide differs on this point, using camelCased instead. And PEP 8 does say that it’s OK, if not downright preferred, to stay true to the style around you.



But one thing I learned from looking through Rails code was that the underscore_style was easier to read, since the underscore acts like a space. And I’ve become a big fan of writing code that communicates intent; that reads like a story (somewhat). Extract Method is your friend. I’ve grown very distrustful of excessive nesting, or of having very long bodies inside of a ‘for’ or ‘if’ block.
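A minimal sketch of what I mean by Extract Method (all names here are made up for illustration): pull the condition and the calculation out of the loop body so the summary function reads like a story.

```python
# Before: the intent is buried in one long loop body.
def summarize(orders):
    total = 0
    for order in orders:
        if order.get('status') == 'paid' and not order.get('refunded'):
            total += order['amount'] - order.get('discount', 0)
    return total

# After Extract Method: each piece of intent gets a name.
def is_billable(order):
    return order.get('status') == 'paid' and not order.get('refunded')

def net_amount(order):
    return order['amount'] - order.get('discount', 0)

def summarize_clearly(orders):
    return sum(net_amount(o) for o in orders if is_billable(o))
```

Both behave the same; the second version just tells you *why* at each step.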



That’s about it. Hell of an update, huh? Well, work’s really started to become work, and is quite enjoyable. I’ve got a good flow going and don’t feel I have as much need (nor place) to be an advocate or crank. As I’ve mentioned before, we’ve gotten incredible levels of code re-use by building our internal libraries and applications on top of Zope 3, and we’ve been able to grow them so much that they’re really the first level of framework. It was such a struggle to do this in Zope 2, but in Zope 3 it does fall (fairly) neatly into place. Nothing else in the Python web-framework-whatsit world comes close.



The only toolkit that’s even better? SQLAlchemy. It’s pretty much the only way I’ll interact with RDBMS systems in Python from this point out. And I don’t mean I’ll be writing every RDBMS interaction as an object-relational mapping. SQLAlchemy is great because it provides a good connection / pooling infrastructure; a good Pythonic query building infrastructure; and then a good ORM infrastructure that is capable of complex queries and mappings (as well as some pretty stone-simple ones).
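A rough sketch of those layers (the table and data are invented, and the exact spelling has shifted between SQLAlchemy releases - this follows the modern expression-language style): the engine handles connections and pooling in one place, and queries are composed as Python expressions rather than SQL strings.

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, insert, select)

# engine: connection + pool management in one place
engine = create_engine('sqlite:///:memory:')

# a made-up table, purely for illustration
metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50)))
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(insert(users), [{'name': 'alice'}, {'name': 'bob'}])
    # Pythonic query building: composable expressions, no SQL strings
    query = select(users.c.name).where(users.c.name != 'bob')
    names = [row[0] for row in conn.execute(query)]
```

The ORM layer sits on top of this same expression system, so you can drop down to plain queries whenever a full object-relational mapping is overkill.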


Baca Selengkapnya ....

Numbers

Posted by Unknown Kamis, 16 Agustus 2007 0 komentar

I don’t know when I last used a spreadsheet for its actual spreadsheet capabilities, on a sheet I designed myself. I think it may go back to AppleWorks (on the Apple II)! Sure, I’ve used sheets like time cards and travel requests that others had made, where I just had to fill in the holes. And I’ve received more than my fair share of spreadsheets used like an outliner / lightweight database / structured note list. But I don’t remember how long it’s been since I used a spreadsheet to figure out a budget, to track expenses, or any other mundane thing like that. Until last night.



I was getting ready to pay my mid-month bills, and I was trying to figure out how much I could pay towards one of my credit cards and still have enough cash to cover the expenses remaining in the month. I also realized that I’ve been spending quite a bit at the iTunes Music Store and hadn’t been tracking any of it. I decided that this would be an excellent time to try out Apple’s new spreadsheet application, Numbers. I found a downloadable time trial of Apple’s iWork ‘08 suite, and immediately got to work.



Numbers is pretty damn cool. I don’t know if there are other spreadsheets that behave like this, but in modern times, it seems so obvious: instead of having the big set of cells in one large table, you work in small floating spreadsheets / tables. This is a big deal for so many reasons, with the most obvious being layout. Another great reason is that each table/spreadsheet can be more focused on its job. Already, Numbers felt a lot more intuitive than anything I had used in a long time.



When doing my simple rest-of-the-month budget, my main question was “how much can I pay on this card and still have enough cash on hand for the rest of the month?” Numbers made it easy with its slider option. For just this one cell, I was able to quickly configure it to give me a slider with a range of -700 to -500. When I got the rest of the budget entered, I could then play with the slider and watch its impact on the total-leftover cell. In previous months, I’ve generally done this calculation in my head, or compared where I stood the prior month after paying this particular bill. It was much nicer to whip up a simple spreadsheet where I could make this one particular number interactive and see the results immediately.
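The same what-if, sketched as code (the dollar figures here are invented; only the -700 to -500 slider range comes from the sheet described above):

```python
# hypothetical numbers standing in for the real budget cells
cash_on_hand = 2000
remaining_expenses = 1100

def leftover(card_payment):
    # card_payment is negative, like the -700..-500 slider range
    return cash_on_hand + card_payment - remaining_expenses

# sweep the slider's range and watch the "total leftover" cell change
for payment in (-700, -600, -500):
    print(payment, leftover(payment))
```

The slider just makes this loop interactive: drag one cell, watch the dependent cell update.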



So I was able to get a couple of simple but nice looking spreadsheets together quickly that gave me actual data. I could easily play with this data, or just be embarrassed by it (I have spent quite a bit on the iTunes Music Store).



There are still a lot of old-style spreadsheet rules in play, at least in formulas and the like. That’s made a bit easier by being able to use header names (ie, =SUM(Total) or =MINA(Date Purchased)). I think it was Lotus’ Improv, which first appeared on NeXTStep, that worked this way. In fact, I think with Improv, it was the only way you could work: there were no A/B/C/D columns or rows. This was part of a cool feature of Improv wherein you could drag and drop header representations and regroup the data visually without impact on the calculations. I still think that was one of the most forward-thinking spreadsheet applications. But, it’s gone. I believe there’s some open source variation on the idea, possibly written just for GNUStep…?



Still, Numbers is pretty decent. I love the free-floating tables. It does make it much easier to compose complex spreadsheet pages out of multiple tables and data types. It’s pretty easy to refer to other tables as well. And it’s nice to have non-tabular data (text, graphics, etc) floating free from those numbers, making it easier to adjust layouts without impacting cells.



I’m impressed enough that I’m quite likely to buy iWork ‘08, just for Numbers alone. I have a small need for Pages and almost no need for Keynote, but I do find myself needing to get on top of my finances and similar data. Numbers is the first tool I’ve encountered that I think will let me handle my odd needs without requiring a degree or summer course in spreadsheets.


Baca Selengkapnya ....

Yahoo TV Is Now Useless

Posted by Unknown Rabu, 18 Juli 2007 0 komentar

So, I used to use “My Excite” as my little personal portal. It had good TV listings, which is always important. But many years ago, as Excite was burying its content under more and more ads, I switched to “My Yahoo!”. Which was even better. Great TV listings. Even now, when I don’t have cable or satellite, I find it valuable.



But then they changed. They upgraded to super fancy TV listings full of AJAX-y action.



But do you know what sucks?



THEY CAN’T GET TIMEZONES RIGHT!



It’s been quite a few months now, and they still can’t get my timezone right.



For seven or eight years, I had no problem with localized listings. Although for a few of those years I was in the blessed Eastern Time Zone. But even when I moved out west - no problem.



But now, apparently Yahoo! has invested all of their resources into flashy features but they can’t tell me when a tv show is on according to the time in my area.



Absolutely useless.



For a while, I’ve gotten by with using the TV listings on the “My Yahoo!” page. It didn’t have all of the fancy features, so somehow it managed to get the times right. But as of today, I’m told that I have to use the new fancy-ass “My Yahoo! Beta”.



First thing that happens when I visit the My Yahoo! beta?



“We don’t support Safari…”



Goddammit. Many of their other upgraded YUI based sites work fine in Safari. I can understand them saying this months ago, which was the last time I looked at the new beta. But to still be having an issue? While trying to be a shiny partner of the lovely iPhone? What’s up?



Anyways, I clicked the “live on the edge” button to see the new page. There were the TV listings. In the wrong time zone!



I submitted an email months ago about this and got the “yeah, we know, we’re working on this” response.



I know that time zones are a bitch to work with, but come on: this shit has worked for years. And now it’s been how long since the launch of the fancy new Yahoo TV section? Six months? Seven? Surely someone could have worked this out by now.



I’m so disappointed. Yahoo! was always one of the most reliable web sites. And I appreciate what they’ve given to the developer community with YUI and other tools. But this little TV listing issue just takes the cake. It makes the TV section absolutely useless, and now “My Yahoo!”, which has been my ‘home page’ for years, is all but useless too, since the TV listings had become its most valued resource as their other sources stopped working.



Very frustrating.



And I’m still looking for a good TV listing site. But all I’ve come up with, so far, is pretty much bullshit. Wrong channels, inability to properly remember channels, too many ads, hard to access listings, slow display…. Augh.



Pissed.


Baca Selengkapnya ....

Traits / Roles as Alternative to Abstract Base Classes

Posted by Unknown Selasa, 08 Mei 2007 0 komentar

While digging through the Python-3000 development list archives, trying to figure out the state of thought circling PEP’s 3119 and 3124, I came across this gem:



“Traits/roles instead of ABCs”, by Collin Winter.



With ABCs referring to Abstract Base Classes (PEP 3119).



Winter’s proposal is similar to my recent post, which is that this sort of “capability inference” should be dynamic, and not bound to the rigid nature of the class hierarchy. In my post on this subject, I showed a number of different implementations of a single interface (specification, role, whatever) - only one implementation followed the basic class-instance scenario. All others provided the exact same outward appearance, while internally they were implemented as module-level functions, class or static methods, or a dynamically composed single-use object (a brainless instance was made and had methods dynamically attached).



Winter’s roles/traits system, which refers to roles in Perl 6 and traits in Squeak, is along the same lines. I hope to hell it gains traction.


Baca Selengkapnya ....

ABC may be easy as 123, but it can't beat zope.interface

Posted by Unknown Jumat, 04 Mei 2007 0 komentar

I guess the deadline may have come and gone for getting in PEPs for Python 3000. Guido’s already written up a PEP Parade.



Of particular interest to me has been the appearance of PEPs for Abstract Base Classes (PEP 3119) and the more exhaustive PEP 3124 which covers “Overloading, Generic Functions, Interfaces, and Adaptation.”



Both of these aim to provide ways of saying “this is file-ish”, “this is string-ish,” without requiring subclassing from a concrete “built-in” type/class. But I think they both fall short a little bit, while zope.interface (from the Zope 3 family) provides the best solution.



PEP 3119 (Abstract Base Classes) has a section covering comparisons to alternative techniques, and it specifically mentions “For now, I’ll leave it to proponents of Interfaces to explain why Interfaces are better.” So this is my brief attempt at explaining why.



A quote from PEP 3119 that I particularly like is “Like all other things in Python, these promises are in the nature of a gentlemen’s agreement…” The Interfaces as specified and used in Zope 3 and some other systems are the same way. They are not “bondage and discipline” Interfaces. They are not the ultra-rigid Eiffel contracts, nor are they the rigid and limited Interfaces as used by Java. They are basically a specification, and they can be used (as mentioned in PEP 3119) to provide additional metadata about a specification. There are some simple tools in zope.interface.verify to check an implementation against a specification, but those are often used in test suites; they’re not enforced hard by any system. The agreement might be “I need a seekable file”, which might mean it expects the methods/messages ‘read’, ‘seek’, and ‘tell’. If you only provide ‘read’ and ‘seek’, then it’s your fault for not living up to the agreement. That’s no different than the Python of today. What Interfaces and Abstract Base Classes aim to provide is a better clarification of what’s expected. Sometimes “file-like” in Python (today) means it just needs a ‘read’ method. Sometimes it means the full suite of file methods (read, readlines, seek, tell). Same thing with sequences: sometimes it just means “something iterable”. Other times it means “support append and extend and pop”.
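A tiny illustration of that ambiguity (the classes and functions here are made up): two consumers both ask for something “file-like”, but they mean different agreements.

```python
from io import StringIO

def head(f, n=5):
    # this consumer's agreement only requires 'read'
    return f.read(n)

def last_char(f):
    # this one also requires 'seek' and 'tell': a stricter agreement
    f.seek(0, 2)                  # seek to the end
    if f.tell() == 0:
        return ''
    f.seek(f.tell() - 1)
    return f.read(1)

class ReadOnly(object):
    """Provides only 'read': fine for head(), breaks last_char()."""
    def __init__(self, text):
        self._text = text

    def read(self, n=-1):
        out = self._text if n < 0 else self._text[:n]
        self._text = self._text[len(out):]
        return out
```

Both `head(StringIO("hello"))` and `head(ReadOnly("hello"))` work, but `last_char(ReadOnly("hello"))` blows up with an AttributeError: the gentlemen’s agreement was never spelled out.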



Another side benefit of Interfaces as specification is that they provide a common language for, well, specifications. Many PEPs propose some sort of API, especially informational PEPs like WSGI (PEP 333) or API for Cryptographic Hash Functions (PEP 247). I’ll use PEP 247 as an example for my attempt at explaining why Zope 3’s Interfaces are Better.



A problem with Abstract Base Classes is this: they’re limited to classes. Even when PEP 3119 mentions Interfaces, it does so like this:




“Interfaces” in this context refers to a set of proposals for additional metadata elements attached to a class which are not part of the regular class hierarchy…




It then goes on to mention that such specifications (in some proposals and implementations) may be mutable; and then says that’s a problem since classes are shared state and one could mutate/violate intent. That’s a separate discussion that I’m not going to go into here.



What is important is this severely limited focus on classes. zope.interface works on objects in general: not just ordinary instances of a class, but classes themselves, and also modules.



There are two important verbs in zope.interface: implements and provides. provides is the most important one - it means that this Object, whatever that object may be, provides the specified interface directly.



implements is often used in class definitions. It means “instances of this class will provide the specified interface”. It can also be thought of in terms of Factories and/or Adaptation - “calling this object will give you something that provides the desired interface.”



“What does that matter?” you might ask. Well, there are all sorts of ways to compose objects in Python. A module is an object. It has members. A class is an object. An instance of a class is, of course, an object. Functions and methods are also objects in Python, but for the most part what we care about here are Modules, Classes, and Instances.



Because when it comes down to actual usage in code, it doesn’t particularly matter what an object is. In PEP 3124, the author (Phillip J Eby) shows the following interface:



class IStack(Interface):
    @abstract
    def push(self, ob):
        """Push 'ob' onto the stack"""

    @abstract
    def pop(self):
        """Pop a value and return it"""


Ignore the @abstract decorators, as they’re artifacts of the rest of his PEP and/or related to PEP 3119. What is important is the use of self.



“self” is an implementation artifact that is invisible in use. Sure, you can write a Stack implementation like this. (Note: I’m going to use zope.interface terminology and style from here on out):



import zope.interface

class Stack(object):
    zope.interface.implements(IStack)

    def __init__(self):
        self._stack = []

    def push(self, ob):
        self._stack.append(ob)

    def pop(self):
        return self._stack.pop()


But when it’s being used, it’s used like this:



def do_something_with_a_stack(stack):
    stack.push(1)
    stack.push(2)
    # ...
    top = stack.pop()

stack_instance = Stack()
IStack.providedBy(stack_instance)
# True
IStack.providedBy(Stack)
# False

do_something_with_a_stack(stack_instance)
# works fine
do_something_with_a_stack(Stack)
# raises an exception because `Stack.push(1)` is passing `1`
# to `self`.. unbound method, bla bla bla.


Notice that there is no ‘self’ reference visibly used when dealing with the IStack implementation. This is an extremely important detail. What are some other ways that we may provide the IStack interface?



One way is to do it with class methods and properties, effectively making a singleton. (This isn’t a good way to do it, and is just here as an example).



import zope.interface

class StackedClass(object):
    zope.interface.classProvides(IStack)

    _stack = []

    @classmethod
    def push(class_, ob):
        class_._stack.append(ob)

    @classmethod
    def pop(class_):
        return class_._stack.pop()

IStack.providedBy(StackedClass)
# True

do_something_with_a_stack(StackedClass)
# this time it works, because `StackedClass.push(1)` is a class method,
# and is passing `StackedClass` to the `class_` parameter, and `1`
# to `ob`.


Another variation of the above is using Static Methods:



import zope.interface

class StaticStack(object):
    zope.interface.classProvides(IStack)

    _stack = []

    @staticmethod
    def push(ob):
        StaticStack._stack.append(ob)

    @staticmethod
    def pop():
        return StaticStack._stack.pop()


Again, StaticStack.push(1) and StaticStack.pop() work fine. Now let’s try a third way - in a module! Let’s call this module mstack (file: mstack.py):



import zope.interface

zope.interface.moduleProvides(IStack)

_stack = []

def push(ob):
    _stack.append(ob)

def pop():
    return _stack.pop()


Then in other code:



import mstack

IStack.providedBy(mstack)
# True
mstack.push(1)
mstack.push(2)

print mstack.pop()
# 2


So whether we’re dealing with the instance in the first example (stack_instance), the classes in the second two examples (StackedClass and StaticStack), or the module in the last example (mstack), they’re all objects that live up to the IStack agreement. So having self in the Interface is pointless. self is a binding detail.



Jim Fulton, the main author of zope.interface, taught me this a long time ago. Because in Zope 2, you could also make an IStack implementation using a Folder and a pair of Python scripts. Well, those Python scripts (as used in Zope 2 “through-the-web” development) have at least 4 binding arguments. Instead of ‘self’, the initial arguments are context, container, script, traverse_subpath. Just like self is automatically taken care of by the class-instance binding machinery, the four Zope Python Script binding arguments are automatically taken care of by Zope 2’s internal machinery. You never pass those arguments in directly; you just use them like push(ob) and pop().



So there it is - many ways to provide this simple “Stack” Interface. And I believe that both PEP 3119 and PEP 3124 are short-sighted in focusing exclusively on the class-instance relationship (or so it appears).



And since many objects, particularly instances, are mutable, one could compose an IStack implementation on the fly.



class Prototype(object):
    """ Can be anything... """

pstack = Prototype()
pstack._stack = []

def pstack_push(ob):
    pstack._stack.append(ob)

def pstack_pop():
    return pstack._stack.pop()

pstack.push = pstack_push
pstack.pop = pstack_pop

# Now we can say that this particular instance provides the IStack
# interface directly - has no impact on the `Prototype` class
zope.interface.directlyProvides(pstack, IStack)

pstack.push(1)
pstack.push(2)
print pstack.pop()
# 2

# We can remove support as well
del pstack.push
zope.interface.noLongerProvides(pstack, IStack)


Examples of dynamically constructed objects in the real world: a network services client, particularly one for an overwrought distributed object system (CORBA, SOAP, and other things that make you cry in the night). Dynamic local ‘stub’ objects may be created at run time, but those could still be said to provide a certain interface.



So now let’s look at whether it matters that you’re dealing with a class or not:



@implementer(IStack)
def PStack():
    pstack = Prototype()
    pstack._stack = []

    def pstack_push(ob):
        pstack._stack.append(ob)

    def pstack_pop():
        return pstack._stack.pop()

    pstack.push = pstack_push
    pstack.pop = pstack_pop
    zope.interface.directlyProvides(pstack, IStack)

    return pstack

@implementer(IStack)
def StackFactory():
    # Returns a new `Stack` instance from the earlier example
    return Stack()

import mstack
import random

@implementer(IStack)
def RandomStatic():
    # chooses between the two class-based versions and the module
    return random.choice([StackedClass, StaticStack, mstack])


All three are factories that will return an object providing an IStack implementation, exactly like the Stack class in the first example, which also declared implements(IStack): when the class is instantiated / called, a new object is made that provides the IStack interface. In Python, another thing that doesn’t really matter is whether something is a class or a function. All of the following lines of code yield a result that is the same to the consumer. The internal details of what is returned may vary, but the IStack interface works on all of them:



Stack()          # class
PStack()         # 'Prototype' dynamically constructed object
StackFactory()   # wrapper around the basic class
RandomStatic()   # chooses one of the class/static method implementations


And whether we’re looking at the class implementation, or any of the factory based implementations, the result should be the same:



IStack.implementedBy(Stack) # class
# True
IStack.providedBy(Stack)
# False
IStack.providedBy(Stack())
# True

IStack.implementedBy(PStack) # Factory
# True
IStack.providedBy(PStack)
# False
IStack.providedBy(PStack())
# True


No matter which method of instantiation is used, they should all pass the verifyObject check, which checks whether all of the specified members are provided and that the method/function signatures match the specification:



from functools import partial
from zope.interface.verify import verifyObject

verify_stack = partial(verifyObject, IStack)

all(verify_stack(stack) for stack
    in [Stack(), PStack(), StackFactory(), RandomStatic()])
# True


Now the class-based options will fail the implementedBy check, because it’s the class itself that provides the implementation, not its instances as with Stack:



IStack.implementedBy(StackedClass)
# False
IStack.providedBy(StackedClass)
# True
IStack.providedBy(StackedClass())
# False


“OK”, you might say, “but still, why does it matter? Why might we really care about whether these abstract specifications work only with classes? It seems smaller, simpler.”



The main advantage is that a specification should (generally) make no assumptions about implementation. If the specification, aka the “gentlemen’s agreement”, is generally met, it shouldn’t matter whether it’s provided by a class, an instance, a module, an extension module, or some dynamically constructed object. The specification language should be the same.



Going back to PEP 247, the “cryptographic hash API”: there is a specification in that module about what the ‘module’ must provide, and what the hash objects must provide. Consider also the WSGI spec, the DB-API specs, and all of the other formal and informal specs that are floating around just in the PEPs. Using zope.interface, those specifications can be spelled out in the same fashion. WSGI just cares about a particular function name and signature. It can be provided by a single function in a simple module, or as a method from an object put together by a large system like the full Zope 3 application framework and server. It just wants a callable. This is a little bit ugly in zope.interface, but I think it works in practice. Here’s how it could be specified:



class IWSGIApplication(Interface):
    def __call__(environ, start_response):
        """ Document the function """
        # and/or use tagged values to set additional metadata


This just means that a WSGIApplication must be a callable object taking environ and start_response arguments. A callable object may be a function (taken from PEP 333):



def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type','text/plain')]
    start_response(status, response_headers)
    return ['Hello world!\n']


Or a class (the __init__ is what is callable here). The WSGI spec might also state that the result “should be iterable (support __iter__)”. Maybe that’s loosely enforced, but the following example shows how the class can make separate declarations about what the class directly provides, and what its instances implement. Instead of using any decorators or magic-ish “class decorators” (the implements, classProvides calls above), we’ll make the declarations for both AppClass and simple_app in the same manner, which matches the style in PEP 3124.



class AppClass(object):
    def __init__(self, environ, start_response):
        self.environ = environ
        self.start = start_response

    def __iter__(self):
        status = '200 OK'
        response_headers = [('Content-type','text/plain')]
        self.start(status, response_headers)
        yield "Hello world!\n"

from zope.interface import directlyProvides, classImplements

# Both 'simple_app' and 'AppClass' are callable with the same arguments,
# so they both *provide* the IWSGIApplication interface

directlyProvides(simple_app, IWSGIApplication)
directlyProvides(AppClass, IWSGIApplication)

# And we can state that AppClass instances are iterable by supporting
# some phantom IIterable interface
classImplements(AppClass, IIterable)


What are the benefits of this, beyond just having a common way of spelling specifications? Instead of, or in addition to, abstract base classes, the core Python libraries can include all of these specs, even if they don’t provide any concrete implementation. Then I could have a unit test in my code that uses verifyClass or verifyObject to ensure I stay inline with the specification.



def test_verifySpec(self):
    verifyClass(ICryptoHash, MyHashClass)


Then, if the specification changes in a new version of Python or in a new version of someone else’s library or framework, I can be notified.



Or if the specification undergoes a big change, a new spec could be written, such as IWSGI2Application. Then by process of adaptation (not covered in this post) or interface querying, a WSGI server could respond appropriately to implementations of the earlier spec:



if IWSGI2Application.providedBy(app):
    # Yay! We don't have to do anything extra!
    # ... do wsgi 2 work
elif IWSGIApplication.providedBy(app):
    # We have to set up the old `start_response` object
    # ... do wsgi 1 work
else:
    raise UnsupportedOrUndeclaredImplementation(app)


Adaptation could provide a means of doing the above… (still, not going into the details.. trying not to!)



@implementer(IWSGI2Application)
@adapts(IWSGIApplication)
def wsgi1_to_wsgi2(app):
    return wsgi2wrapper(app)

# And then, replacing the `if, else` above:
wsgi_app = IWSGI2Application(app, None)
if wsgi_app is None:
    raise UnsupportedOrUndeclaredImplementation(app)
# ... do wsgi2 work


When you have both specification and adaptation, then you can write your code against the spec. In the above example, the main code does IWSGI2Application(app, None) which means “for the object app, give me an object that provides IWSGI2Application, or None if there is no means of providing that interface.”



If app provides that interface directly, then app is returned directly. Otherwise an adaptation registry is found, and it’s queried for a callable object (an adapter) that will take ‘app’ as its argument and return an object that provides IWSGI2Application.
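A minimal pure-Python sketch of that lookup (the registry shape and all names here are my own invention, far simpler than zope.component’s real machinery): interfaces are reduced to plain string markers, and the registry maps (have, want) pairs to adapter factories.

```python
# hypothetical, simplified adaptation registry
_adapters = {}

def provides(obj, iface):
    # stand-in for IFoo.providedBy(obj): a simple marker attribute
    return iface in getattr(obj, '_provides', ())

def register_adapter(have, want, factory):
    _adapters[(have, want)] = factory

def adapt(obj, want, default=None):
    if provides(obj, want):
        return obj                      # already provides it: pass through
    for have in getattr(obj, '_provides', ()):
        factory = _adapters.get((have, want))
        if factory is not None:
            return factory(obj)         # an adapter wraps the old object
    return default

# usage, mirroring the WSGI example above
class OldApp(object):
    _provides = ('IWSGIApplication',)

class Wsgi2Wrapper(object):
    _provides = ('IWSGI2Application',)
    def __init__(self, app):
        self.app = app

register_adapter('IWSGIApplication', 'IWSGI2Application', Wsgi2Wrapper)

app = OldApp()
wsgi2_app = adapt(app, 'IWSGI2Application')   # a Wsgi2Wrapper around app
```

The real thing also understands interface inheritance and multi-adapters, but the pass-through-or-wrap decision is the heart of it.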



Another example: Python 3000 is going to change a lot of core specifications and implementations, such as the attributes on function objects (func_code, func_defaults, etc). If an IPy2Function interface were made (and zope.interface or something like it was added to Python 2.x), then code that works with function object internals could program against its preferred spec by adding a line of code:



func = IPy2Function(func)
if my_sniffer(func.func_code):
    raise Unsafe(func)


On Python 2, you’d get the regular function straight through. In Python 3000 / 3.0, an adapter would translate __code__ into func_code, for example. I don’t expect this to happen in reality, but it’s an example of how migration paths could be made between two major software versions, allowing code to run in both.



By taking advantage of this system, my company has seen more re-use with Zope 3 than at any time in our company history. And because (most of) Zope 3 is programmed against specification, we’ve been able to plug in or completely make over the whole system by providing alternative implementations of core specs. This is very hard to do in native Zope 2 (the CMF, on which Plone is based, was probably the first Zope system that started these concepts, which Plone and others were able to take advantage of by providing new tools that matched the provided spec).



At the heart of it, again, is the gentlemen’s agreement, but brought out in full: it doesn’t matter who you are or where you came from (ie, it doesn’t matter what classes are in your family tree or if you are a simple module), as long as you get the job done. There’s a simple contract, and as long as the contract is fulfilled, then everybody is happy.



But if the gentlemen involved can only come from the class system, then there’s still a nasty aristocracy that excludes a large chunk of the populace, all of whom can potentially fulfill the contract. Let’s not cause an uprising, OK?


Baca Selengkapnya ....

plispy

Posted by Unknown Senin, 30 April 2007 0 komentar

Sometimes, it just happens...


>>> pprint(
...     sorted(
...         map(
...             linecount,
...             path('.').walkfiles('*.py')
... )))

Baca Selengkapnya ....

Python's Make Rake and Bake, another and again

Posted by Unknown Selasa, 24 April 2007 0 komentar

Ian Bicking wrote a post recently titled “Python’s Makefile”. He advocates using / re-using distutils… er… setuptools. (I can’t keep them straight - they’ve both become absolute nightmares in my opinion). He then goes off about entry points, separate setup.cfg files, and other things that still go way over my head. The example he shows is convoluted, and I’m ultimately not entirely sure what he’s really advocating (besides the idea - which isn’t bad - of using the near-standard setup.py file/system instead of re-inventing).



But he mentions, earlier:




Because really people are talking about something more like rake — something where you can put together a bunch of code management tools. These aren’t commands provided by the code, these are commands used on the code.



We do have the infrastructure for this in Python, but no one is really using it. So I’m writing this to suggest people use it more: the setup.py file. So where in another environment someone does rake COMMAND, we can do python setup.py COMMAND.




For me, having an easy way to say bla bla COMMAND isn’t as important as having a good system for automating common tasks that I and/or my colleagues do frequently. As we started to depend on more and more code from internal and external repositories, due to our increased re-use when building on Zope 3, I really needed to automate checkouts and exports. Not everything was neatly packaged as an egg, or the released egg didn’t have a bugfix applied, and I still don’t understand how to make eggs work well with Zope 3 in a manner that I’m comfortable with.



I was initially excited about zc.buildout as a way to automate the monotonous but important tasks that revolve around setting up both deployment and development environments. But I didn’t like how zc.buildout specified its tasks/commands in INI format. It was relatively easy to write new ‘recipes’, so I wrote some recipes to do Subversion and CVS checkouts/exports.



But the INI format just pissed me off. It didn’t fit my needs: I needed more conditional control. More code control. And managing complex sets of parameters required making new top-level sections instead of nesting. Before long I was staring at a very long and very narrow file. And in the end, it was building Zope in a way that wouldn’t work for us. So I abandoned it.
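To illustrate the complaint with a hypothetical buildout configuration (the recipe names here are invented for illustration, not real packages): every grouping of parameters becomes another flat top-level section, so related settings drift apart as the file grows, and there is nowhere to put conditional logic.

```ini
[buildout]
parts = mysqldbda formencode workdirs

; hypothetical recipe names below -- purely illustrative
[mysqldbda]
recipe = example.recipes:svncheckout
url = svn://svn.zope.org/repos/main/mysqldbda/tags/mysqldbda-1.0.0

[formencode]
recipe = example.recipes:svncheckout
url = http://svn.colorstudy.com/FormEncode/tags/0.6

[workdirs]
recipe = example.recipes:mkdirs
paths = var/log var/run etc
```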



I briefly looked at some tools that let you write these task files in “pure” Python. Of these, Scons appeared to be the closest thing in Python to Rake, which uses Ruby. But Scons seemed far more focused on general compilation issues (compiling C, Java, etc.), and that’s a problem that never crosses my path.



I just wanted something like rake. What I liked about every Rakefile that I’ve seen is that it’s been quite readable. Rake makes common file / path commands readily available as Ruby methods, classes, and objects. Rake takes advantage of Ruby’s syntax, particularly blocks (and optional parentheses), in a way that makes it not seem like, well, Ruby. It looks like something makefile-ish, something shell-scripting-ish, etc. That’s what I wanted; but, of course, in Python.



So I came up with a system. It’s not yet released to the world - far from finished, and there are many competing ideas out there that I don’t feel like competing with - but it’s already proven to be very useful internally. Generally, it’s been used to automate what I mentioned above: retrieving software from multiple repositories, both Subversion and CVS, and placing them in the proper directories. In particular, we try to stick with certain revisions for third party dependencies, and I got tired of trying to capture this information in READMEs and other files that we could refer to when installing certain configurations. It’s even been useful for downloading such software and applying internal patches:



patch = Command('patch')

@task('mysqldbda')
def mysqldbda():
    """ Installs mysqldbda from subversion and applies patch """
    svn = Subversion('svn://svn.zope.org/repos/main')
    svn.co('mysqldbda/tags/mysqldbda-1.0.0', target='mysqldbda')

    # patch mysqldbda
    log.info("patching mysqldbda")
    patchfile = path('fixes/mysqlda.1-5-07.patch')
    if patchfile.exists():
        print patch.read('-p1', '-i', patchfile)

@task('formencode')
def formencode():
    svn = Subversion('http://svn.colorstudy.com/FormEncode')
    svn.co('tags/0.6/formencode')

task('install', ['mysqldbda', 'formencode'])


It’s also been useful for tasks like getting MochiKit and generating all sorts of packed versions. A lot of what makes this possible is the path.py module, which provides a more object-oriented interface over os, os.path, and other Python file utilities.
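The flavor of that object-oriented interface can be sketched in a few lines (a toy illustration of the idea behind path.py, not the real module):

```python
import os

class path(str):
    """A string that knows it is a filesystem path (toy version)."""

    def __div__(self, other):
        # Python 2 spelling of the / operator: join path segments.
        return path(os.path.join(self, other))

    __truediv__ = __div__  # Python 3 spelling of the same operator

    def exists(self):
        return os.path.exists(self)

    @property
    def name(self):
        return os.path.basename(self)
```

With this, an expression like `path('libs') / 'mochikit'` reads like shell scripting while remaining ordinary Python, which is much of what the real module offers over bare `os.path` calls.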



ROCKFILEPATH = globals().get('ROCKFILEPATH', path('.'))
MOCHIKIT_LIB = ROCKFILEPATH/'libs'/'mochikit'
MOCHIKIT_DL = ROCKFILEPATH/'mochikit_dl'
MOCHIKIT_SRC = MOCHIKIT_DL/'MochiKit'
SCRATCH = MOCHIKIT_LIB/'_scratch.js'
mochikit = namespace('mochikit')

@mochikit.task('get')
def getmochikit():
    if MOCHIKIT_DL.exists() and bool(MOCHIKIT_DL.listdir()):
        return
    svn = Subversion('http://svn.mochikit.com/mochikit')
    svn.co('trunk', target=MOCHIKIT_DL)

@mochikit.task('clearmochilib')
def clearmochilib():
    for jscript in MOCHIKIT_LIB.files('*.js'):
        jscript.remove()

@mochikit.task('make-noexport')
def makenoexport():
    info = Subversion().info(MOCHIKIT_DL)
    src = NOEXPORT.safe_substitute(**info)
    file(MOCHIKIT_LIB/'NoExport.js', 'w').write(src)

@mochikit.task('build', ['get', 'clearmochilib', 'make-noexport'])
def mochi_install():
    for source in MOCHIKIT_SRC.files('*.js'):
        log.info('copy %s -> %s' % (source, MOCHIKIT_LIB))
        source.copy(MOCHIKIT_LIB)

# Javascript Packing tools (JSPack not shown - essentially it's a wrapper
# around combining and piping Javascript through Dojo's custom_rhino.jar
# to use its compression system)
def packmodules(sourcedir, modules, target):
    mods = [(sourcedir/mod) for mod in modules]
    log.info('Packing %s modules', path(target).name)
    JSPack(mods, target).run()

    if SCRATCH.exists():
        SCRATCH.remove()

def jsmin(sources, target):
    packmodules(MOCHIKIT_LIB, sources, MOCHIKIT_LIB/'min'/target)

@mochikit.task('minimize')
def mochiMinimize():
    """
    Generates packed versions of most individual MochiKit files, while
    combining a few core ones together.
    """
    mindir = MOCHIKIT_LIB/'min'
    for jscript in mindir.files('*.js'):
        jscript.remove()
    jsmin(['NoExport.js', 'Base.js', 'Iter.js', 'DOM.js'], 'base-iter-dom.js')
    jsmin(['Style.js', 'Signal.js'], 'style-signal.js')
    jsmin(['Async.js'], 'async.js')
    jsmin(['Color.js'], 'color.js')
    # ...

mochikit.task('install', ['build', 'minimize']).comment('INSTALL!')


I don’t think this falls under the jurisdiction of setup.py (distutils/setuptools). Nor would I want to specify these as zc.buildout recipes and have a separate configuration file to then name all of the files and directories. And, being Python, I don’t really have to deal with compilation steps so I don’t need wrappers around gcc and friends. I’m not (yet) specifying how to build large deployment scenarios. I just need to automate some development tasks, and I need to be able to write them easily. I want to write them in Python, but I want to ensure that they don’t accidentally get imported into normal projects (hence, the files above don’t have a .py extension). And as this is a specialized task, I’ll allow myself to get away with Python shortcuts that I would never touch in normal development, such as import *. In fact, it’s the import * that gives me a lot of the common commands/tools, such as the classes for interacting with Subversion and CVS, managing working directories, etc.



This really stemmed from reading this article by Martin Fowler about people wanting to replace ant with Rake with the advent of JRuby. In the post, Martin states:




The thing with build scripts is that you need both declarative and procedural qualities. The heart of a build file is defining tasks and the dependencies between them. This is the declarative part, and is where tools like ant and make excel. The trouble is that as builds get more complex these structures aren’t enough. You begin to need conditional logic; in particular you need the ability to define your own abstractions. (See my rake article for examples.)



Rake’s strength is that it gives you both of these. It provides a simple declarative syntax to define tasks and dependencies, but because this syntax is an internal DomainSpecificLanguage, you can seamlessly weave in the full power of Ruby.




At that point, I decided that this was the way to go: use Python decorators to wrap ‘task’ functions. The wrapper maintains dependency links, comments, and other things of interest to the internal system, and it allows the task name to be independent of the function name, allowing easier-to-type task names for use from the file system. But the ‘task’ function is plain Python. Or, as some of the examples above show, task can be called without the @ symbol that makes it a decorator. Multiple callable actions can be added to a task, potentially allowing for a more ‘declarative’ style:



mochikit.task('minimize').using_action(
    JSMinMap(
        {'style-signal.js': ['Style.js', 'Signal.js']},
        {'async.js': ['Async.js']},
    ))


Useful, I imagine, for very common patterns. Er. “Recipes”. In any case, it’s a very useful kind of tool. Beats a setup.py-, INI-, or XML-based automation language any day.
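The decorator machinery described above can be sketched in miniature (an invented illustration, not the actual internal code): a registry maps task names to functions plus their dependencies, and running a task first runs its dependencies.

```python
# A miniature task system in the spirit described above (hypothetical,
# not the real implementation).
_registry = {}

def task(name, deps=None):
    """Register a task under an arbitrary name.

    Usable as a decorator, or called directly -- task('install', ['a', 'b'])
    -- to define a task that only aggregates its dependencies.
    """
    def register(f):
        _registry[name] = (deps or [], f)
        return f
    if deps is not None:
        # Direct call: record a no-op body now; decorating later replaces it.
        _registry.setdefault(name, (deps, lambda: None))
    return register

def run(name, _done=None):
    # Depth-first: run each dependency at most once per invocation.
    _done = _done if _done is not None else set()
    if name in _done:
        return
    deps, f = _registry[name]
    for dep in deps:
        run(dep, _done)
    _done.add(name)
    f()
```

The task name being a plain string is what lets it differ from the function name, and the dependency lists give the declarative skeleton while the bodies stay ordinary Python.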



Cock Radio

Posted by Unknown Wednesday, 18 April 2007 0 comments

This whole Don Imus issue has confused the shit out of me. Talk radio is full of that kind of, um, talk. Anyways, it all feels like the first episode of the latest season of South Park. Sometimes, South Park can crank out a new episode in response to a very recent event, but that didn’t happen here. This episode aired weeks earlier.



As for how or why this Don Imus issue exploded in the way that it did - I just don’t understand (and now I feel like Stan Marsh at the end of that South Park episode). There are so many similar things said all the time by many radio “personalities.”



Media Matters has an excellent post up chronicling the many slurs of Glenn Beck, O’Reilly, and more: It’s not just Imus.



The response to the whole Imus situation just seems wrong: a cause célèbre on which everyone can jump. The latest distraction. How the hell did it get so out of hand? Who did it really offend? Why this “nappy headed ho’s” statement? Why not “ghetto slut” (Boortz)? “Turbanned hoodlums” (Savage)?



Imus is probably far less offensive than many of these other radio people, and neither his firing nor all of this special attention is going to make anything better. Nor did it solve anything. It just provided everybody with some bullshit theater.



What about search?

Posted by Unknown Tuesday, 17 April 2007 0 comments

From my tumblog: but I don't want my search engine to be a slide show!


Google - remember that search engine of yours? How about making it better by offering some options like result filtering ("I don't feel like shopping right now, I'm trying to research")?



Reuse and non use

Posted by Unknown 0 comments

We’ve been using Zope 3 in earnest for just over a year and a half now. I would like to report that in that year and a half our little company has achieved more re-use than at any time in our history. This is real re-use too: libraries of tools and objects that are easily shared among both horizontal and vertical markets, yet customized for each customer as needed. Benefits for one are fairly easily shared with all.



In the Zope 2 days, we tried hard to achieve this. But we were constantly having to re-invent the kind of architecture that I believe really makes this work: adaptation, which also brings dynamic view binding, dynamic UI generation (i.e., registering a ‘tab’ for a particular object / interface and having it show up in the UI as necessary), and so on. We had to spend a lot of time making the frameworks that would let us make frameworks.



“Frameworks for making frameworks?” - you heard right. Let’s face it: most web work is custom development. Sometimes custom development is best served by tools like Ruby on Rails or Pylons, or even by plain old PHP. But sometimes you know you’re going to have at least five customers all needing variations on the same thing in the coming months; and potentially more after that. You’re going to need to at least make a library or two.



See, Model-View-Controller isn’t just about “separating business logic from presentation”. It’s about separating it in a way that you can take business objects and logic (the ‘model’ layer; or models and services) and put more than one view on them. And by “more than one view”, I don’t mean “more than one template.” I mean putting wholly different user interfaces on it. I mean being able to take a base library and override a few select options (or many select options) as they appeal to a customer.
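The distinction can be made concrete with a toy sketch (plain Python, with invented names, purely for illustration): one model, and wholly different views bound to it rather than merely different templates.

```python
class Product(object):
    """The model: business data and logic, with no presentation at all."""
    def __init__(self, name, cents):
        self.name = name
        self.cents = cents

    def price(self):
        # Business logic lives with the model, not in any view.
        return '$%.2f' % (self.cents / 100.0)

# Two wholly different user interfaces over the same model --
# swapping one for the other touches no model code.
def html_view(product):
    return '<li>%s - %s</li>' % (product.name, product.price())

def csv_view(product):
    return '%s,%s' % (product.name, product.price())
```

A customer who needs a different interface gets a new view function (or a library of them); the model layer is reused untouched.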



We tried to achieve this in some of our Zope 2 products, but it was hard to extract frameworks. We did OK, but I think the most re-use we ever got was about three or four customers on one toolkit. That was over a three or four year span. We re-used patterns and snippets quite often, but it took a lot of work to extract an e-commerce toolkit from a particular customer’s site, and more work still to make it adaptable and workable for different customer requirements.



In the year and a half since using Zope 3 full time, we’ve had double that - and with far greater results. It’s not an easy system to just start using from scratch, but it can be quite worth it.



Being back at work on some legacy Zope 2 projects has made me all the more appreciative.



By the way: for a simpler Zope 3 development experience, check out Grok.



Tumbling Dirty

Posted by Unknown Monday, 16 April 2007 0 comments

Oh yeah: Dirty Modern. My tumblelog, generally more focused on design, music, etc. We'll see.


I haven't posted too much in Griddle Noise because it's quite hard, sometimes, to write short entries. I always liked the tumblelog format for being explicitly simple. And Tumblr has an excellent bookmarklet for posting entries.



The Web Will Not Replace the Desktop

Posted by Unknown Friday, 13 April 2007 0 comments
Web 2.0 has excited us because we lowered our expectations so much. Of course web apps will get better, and one day will deliver the functionality we currently get from desktop software. They may even do more than our desktop applications one day. But isn’t it a tad strange that we think this is all a huge leap forward? - loose wire blog: It's Not the "Death" of Microsoft, it's the "Death" of Software


The author's main point is that while it's cool that people are making Mind Mapping tools in DHTML, they're still a long way behind desktop apps like MindManager. He goes on to contend that nothing exciting has happened in the "offline" world in recent years.



While it's true that the web has made for some neat and very useful online tools, there are classes of software it misses. There's a reason why we'll never get to the "every computer is just a web browser / flash player" ideal: professional software. It's a world I'm entering again as I'm finally getting my home studio together.



I'm talking about apps like Pro Tools, Reaktor, Final Cut, DVD Studio, Aperture, Lightroom, InDesign, Quark XPress, etc.



Granted, most people don't use those applications, but I think that it's a growing market. As technology grows and commodifies, we need tools to deal with it. The gap between pretty-good consumer gear and pretty-good entry level professional gear is pretty small now in many areas: digital photography, digital video, music, etc.



Now that I think about it, Apple has realized this for some time. They have a pretty good upgrade path. For those who get hooked playing with video in iMovie, there's Final Cut Express at a rather reasonable price. Those who really start to do well with that can go up to the full Final Cut Pro. For Music, there's Garage Band, Logic Express, and Logic Pro.



Within those realms, there's a huge array of plug-ins, virtual instruments, specialized sound tools and environments (Ableton Live, Reaktor 5, Max/MSP, etc). I am amazed at the sounds I get out of Reaktor, and that's only a single product in a single company's impressive set of offerings.



Perhaps the new web applications are freeing up resources on our own machines so that it no longer feels like some dreadful work environment, just at home. There are plenty of useful and usable online tools for doing quick writing, sharing, interacting, thinking, and planning. Typically they offer enough to be usable for those small (or even medium) jobs we occasionally encounter, while freeing us from having something overkill like Office for casual, personal writing. It's easier to specialize a computer for audio work by fine tuning system settings, throwing away silly applications, etc, without making that computer into an island. As long as you have a web browser, you can still check email, contribute to a planning document, etc.



But honestly, I don't think the Desktop is going to die - ever. It's great that we can do so much on the web, but I don't think the native experience is going to die, ever.



And even if you're not on a fucking plane, it does matter: when I moved into this loft, it took me a couple of months to get internet access down here. I was working on a lot of things for the office at the time, and I was able to take it home by just using my laptop: at work, I'd synchronise source code, copy stuff to my laptop and/or sync with .Mac's iDisk, and sync with .Mac for my calendar, etc; as such, even though I was offline, I could work. It was then and there, however, that I decided that although I liked Backpack, it wasn't worth paying for: I needed offline access. I needed, well, OmniOutliner and Tinderbox. My personal project files and note-taking documents are just too precious to be left online (this is why I don't and won't use Stikkit). If the occasional monster storm comes along and takes away the Internet for a few days, the worst feeling in the world would be being disconnected from my notes.



Strangely enough, these online note-takers, organizers, etc, all solve a problem that has plagued me until quite recently: how to do effective sharing of data between home, work, and laptop? How to not get out of sync? I love Tinderbox and I have a couple of big Tinderbox files that I keep on .Mac's iDisk. This means I usually have access to it. But sometimes, I forget to sync or close or save the document when I leave work or close the laptop. What about the little bits of random data, not yet filed, or not really worth filing into that larger document? How can I quickly enter, find, and share that info?



The answer didn't come from any web service, although lord knows I tried a few. The answer came when Tinderbox's developer, Eastgate, ingeniously started bundling Barebones' Yojimbo with Tinderbox. I had looked at Yojimbo in the past, but I'd gone through so many personal note taking / note capturing / note filing systems (Mac OS X has MANY). I didn't want to look at another such product and be fighting between "do I file it in Yojimbo? In Tinderbox? DEVONThink? Can I get to it from home?"



But Yojimbo has a killer feature: it syncs with .Mac! .Mac is the same tool I've used to keep calendars and contacts and Safari bookmarks transparently shared between three machines, and finally someone has made one of these note tools that takes advantage of it. Now I have my enter-a-quick-note, file-it-later system that gets updated and merged automatically. No worries about having an out-of-date iDisk, about forgetting to save and sync. And best of all - it's 100% native and usable offline. And it doesn't get lost in the army of tabs, since every goddamn web "app" is now just something that gets lost in a browser window (for those who wonder why I take so long to reply to mail sent to my GMail account, well, GMail sucks as an application compared to a native mail app. I just don't watch it regularly enough to stay on top of things).



So, anyways, I love a lot of the new web apps. But people need to get a grip. I've been hearing about "the death of the desktop" for eleven or so years now. Stop tricking yourself into thinking you're that high and mighty. If you don't understand the true value of native applications, professional applications, personal data, then you don't understand the desktop's power. As such, you're not going to kill it. Yes, please focus on tools that work well on the web, tools that are simple yet useful. But don't think for a second that I want to spend every second of my computing day in a web browser, nor do I want to spend every second in an Apollo client. It's not going to replace everything, any more than Java, Netscape Constellation, or even Active Desktop did.

