plispy
Sometimes, it just happens...
>>> pprint(
...     sorted(
...         map(
...             linecount,
...             path('.').walkfiles('*.py')
...         )))
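For context, linecount isn't a builtin, and path.py may not be installed; here's a rough stdlib-only sketch of what linecount and a walkfiles equivalent might look like (the names and signatures are my guesses, not the actual code behind the snippet):

```python
import os
from fnmatch import fnmatch

def linecount(filepath):
    """Return (line count, filepath) so that sorting orders files by size."""
    with open(filepath) as f:
        return (sum(1 for _ in f), filepath)

def walkfiles(top, pattern):
    """Yield every file under `top` whose name matches a glob pattern,
    mimicking path('.').walkfiles('*.py') with only the stdlib."""
    for dirpath, _, filenames in os.walk(top):
        for name in filenames:
            if fnmatch(name, pattern):
                yield os.path.join(dirpath, name)

# The snippet above then reads as:
#   pprint(sorted(map(linecount, walkfiles('.', '*.py'))))
```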
Ian Bicking wrote a post recently titled “Python’s Makefile”. He advocates using / re-using distutils… er… setuptools. (I can’t keep them straight - they’ve both become absolute nightmares in my opinion). He then goes off about entry points, separate setup.cfg files, and other things that still go way over my head. The example he shows is convoluted, and I’m ultimately not entirely sure what he’s really advocating (besides the idea - which isn’t bad - of using the near-standard setup.py file/system instead of re-inventing).
But he mentions, earlier:
Because really people are talking about something more like rake — something where you can put together a bunch of code management tools. These aren’t commands provided by the code, these are commands used on the code.
We do have the infrastructure for this in Python, but no one is really using it. So I’m writing this to suggest people use it more: the setup.py file. So where in another environment someone does rake COMMAND, we can do python setup.py COMMAND.
For me, having an easy way to say bla bla COMMAND isn’t as important as having a good system for automating common tasks that I and/or my colleagues do frequently. As we started to depend on more and more code from internal and external repositories, due to our increased re-use when building on Zope 3, I really needed to automate checkouts and exports. Not everything was neatly packaged as an egg, or the released egg didn’t have a bugfix applied, and I still don’t understand how to make eggs work well with Zope 3 in a manner that I’m comfortable with.
I was initially excited about zc.buildout as a way to automate the monotonous but important tasks that revolve around setting up both deployment and development environments. But I didn’t like how zc.buildout specified its tasks/commands in INI format. It was relatively easy to write new ‘recipes’, so I wrote some recipes to do Subversion and CVS checkouts/exports.
But the INI format just pissed me off. It didn’t fit my needs, basically, wherein I needed more conditional control. More code control. And managing complex sets of parameters required making new top-level sections instead of nesting. Before long I was staring at a very long and very narrow file. And in the end, it was building Zope in a way that wouldn’t work for us. So I abandoned it.
I briefly looked at some tools that let you write these task files in “pure” Python. In this way, Scons appeared to be the closest thing in Python to Rake, which uses Ruby. But Scons seemed far more focused on general compilation issues (compiling C, Java, etc.) - and that’s never a problem that crosses my path.
I just wanted something like rake. What I liked about every Rakefile that I’ve seen is that it’s been quite readable. Rake makes common file / path commands readily available as Ruby methods, classes, and objects. Rake takes advantage of Ruby’s syntax, particularly blocks (and optional parenthesis) in a way that makes it not seem like, well, Ruby. It looks like something makefile-ish, something shell-scripting-ish, etc. That’s what I wanted; but, of course, in Python.
So I came up with a system. It’s not yet released to the world - far from finished, and there are many competing ideas out there that I don’t feel like competing with - but it’s already proven to be very useful internally. Generally, it’s been used to automate what I mentioned above: retrieving software from multiple repositories, both Subversion and CVS, and placing them in the proper directories. In particular, we try to stick with certain revisions for third party dependencies, and I got tired of trying to capture this information in READMEs and other files that we could refer to when installing certain configurations. It’s even been useful for downloading such software and applying internal patches:
patch = Command('patch')

@task('mysqldbda')
def mysqldbda():
    """ Installs mysqldbda from subversion and applies patch """
    svn = Subversion('svn://svn.zope.org/repos/main')
    svn.co('mysqldbda/tags/mysqldbda-1.0.0', target='mysqldbda')

    # patch mysqldbda
    log.info("patching mysqldbda")
    patchfile = path('fixes/mysqlda.1-5-07.patch')
    if patchfile.exists():
        print patch.read('-p1', '-i', patchfile)

@task('formencode')
def formencode():
    svn = Subversion('http://svn.colorstudy.com/FormEncode')
    svn.co('tags/0.6/formencode')

task('install', ['mysqldbda', 'formencode'])
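The task machinery itself isn’t shown in the post (the system is unreleased), but purely as an illustration of how a registry could support both the decorator form and the plain task('install', [...]) call above, here is a hypothetical sketch - every name in it is my invention, not the author’s code:

```python
# Hypothetical task registry: task(name) returns a Task object that can be
# used directly as a decorator, while task(name, deps) also records
# dependencies for dependency-only tasks like 'install'.
TASKS = {}

class Task(object):
    """Holds a task's name, its dependency names, and its action functions."""
    def __init__(self, name, deps):
        self.name = name
        self.deps = list(deps)
        self.actions = []
        self._comment = None

    def __call__(self, func):
        # Lets a Task act as a decorator: @task('build') appends the
        # function as an action and hands it back unchanged.
        self.actions.append(func)
        return func

    def comment(self, text):
        self._comment = text
        return self  # chainable, as in .comment('INSTALL!')

def task(name, deps=()):
    """Register (or fetch) a task by name. Works as a decorator factory
    (@task('build')) and as a plain call (task('install', ['a', 'b']))."""
    t = TASKS.get(name)
    if t is None:
        t = TASKS[name] = Task(name, deps)
    if deps:
        t.deps = list(deps)
    return t
```

This covers both spellings seen above: decorating a function registers an action under an independent task name, while calling task() with only a dependency list declares a task that exists purely to chain others.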
It’s also been useful for tasks like getting MochiKit and generating all sorts of packed versions. A lot of what makes this possible is the path.py module, which provides a more object-oriented interface over os, os.path, and other Python file utilities.
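To show the core idea that makes path.py read so well - a string subclass whose / operator joins paths and whose methods wrap os and os.path - here is a heavily stripped-down sketch; the real module offers far more than these few methods:

```python
import os
import fnmatch

class path(str):
    """A tiny sketch of path.py's central trick: a str subclass, so it
    works anywhere a filename string does, with os/os.path wrapped as
    methods and '/' overloaded to mean os.path.join."""

    def __div__(self, other):            # the / operator under Python 2
        return path(os.path.join(self, other))

    __truediv__ = __div__                # same operator under Python 3

    def exists(self):
        return os.path.exists(self)

    def files(self, pattern='*'):
        """Files directly inside this directory matching a glob pattern."""
        return [self / name for name in sorted(os.listdir(self))
                if fnmatch.fnmatch(name, pattern)
                and os.path.isfile(os.path.join(self, name))]
```

With that in place, expressions like (path('libs')/'mochikit').files('*.js') read almost like shell while staying plain Python.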
ROCKFILEPATH = globals().get('ROCKFILEPATH', path('.'))
MOCHIKIT_LIB = ROCKFILEPATH/'libs'/'mochikit'
MOCHIKIT_DL = ROCKFILEPATH/'mochikit_dl'
MOCHIKIT_SRC = MOCHIKIT_DL/'MochiKit'
SCRATCH = MOCHIKIT_LIB/'_scratch.js'

mochikit = namespace('mochikit')

@mochikit.task('get')
def getmochikit():
    if MOCHIKIT_DL.exists() and bool(MOCHIKIT_DL.listdir()):
        return
    svn = Subversion('http://svn.mochikit.com/mochikit')
    svn.co('trunk', target=MOCHIKIT_DL)

@mochikit.task('clearmochilib')
def clearmochilib():
    for jscript in MOCHIKIT_LIB.files('*.js'):
        jscript.remove()

@mochikit.task('make-noexport')
def makenoexport():
    info = Subversion().info(MOCHIKIT_DL)
    src = NOEXPORT.safe_substitute(**info)
    file(MOCHIKIT_LIB/'NoExport.js', 'w').write(src)

@mochikit.task('build', ['get', 'clearmochilib', 'make-noexport'])
def mochi_install():
    for source in MOCHIKIT_SRC.files('*.js'):
        log.info('copy %s -> %s' % (source, MOCHIKIT_LIB))
        source.copy(MOCHIKIT_LIB)

# Javascript Packing tools (JSPack not shown - essentially it's a wrapper
# around combining and piping Javascript through Dojo's custom_rhino.jar
# to use its compression system)
def packmodules(sourcedir, modules, target):
    mods = [(sourcedir/mod) for mod in modules]
    log.info('Packing %s modules', path(target).name)
    JSPack(mods, target).run()
    if SCRATCH.exists():
        SCRATCH.remove()

def jsmin(sources, target):
    packmodules(MOCHIKIT_LIB, sources, MOCHIKIT_LIB/'min'/target)

@mochikit.task('minimize')
def mochiMinimize():
    """
    Generates packed versions of most individual MochiKit files, while
    combining a few core ones together.
    """
    mindir = MOCHIKIT_LIB/'min'
    for jscript in mindir.files('*.js'):
        jscript.remove()
    jsmin(['NoExport.js', 'Base.js', 'Iter.js', 'DOM.js'], 'base-iter-dom.js')
    jsmin(['Style.js', 'Signal.js'], 'style-signal.js')
    jsmin(['Async.js'], 'async.js')
    jsmin(['Color.js'], 'color.js')
    # ...

mochikit.task('install', ['build', 'minimize']).comment('INSTALL!')
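Running a task like 'install' means resolving its dependency chain first - 'build' pulls in 'get', 'clearmochilib', and 'make-noexport' before its own action runs. The post never shows how its runner does this, but the rake/make behavior it describes can be sketched as a depth-first walk that runs each task exactly once and rejects cycles (the data layout here is invented for illustration):

```python
# Hypothetical dependency-ordered runner. `tasks` maps a task name to a
# pair (list of dependency names, action callable or None). Dependencies
# run depth-first; every task runs at most once; cycles raise an error.

def run(name, tasks, done=None, running=None):
    done = set() if done is None else done
    running = set() if running is None else running
    if name in done:
        return []                      # already executed on this run
    if name in running:
        raise ValueError('circular dependency at %r' % name)
    running.add(name)
    deps, action = tasks[name]
    order = []
    for dep in deps:
        order.extend(run(dep, tasks, done, running))
    if action is not None:
        action()                       # run this task's own action last
    running.discard(name)
    done.add(name)
    return order + [name]
```

For the tasks above, run('install', ...) would execute get, then build, then minimize, and finally the action-less install task itself.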
I don’t think this falls under the jurisdiction of setup.py (distutils/setuptools). Nor would I want to specify these as zc.buildout recipes and have a separate configuration file to then name all of the files and directories. And, being Python, I don’t really have to deal with compilation steps, so I don’t need wrappers around gcc and friends. I’m not (yet) specifying how to build large deployment scenarios. I just need to automate some development tasks, and I need to be able to write them easily. I want to write them in Python, but I want to ensure that they don’t accidentally get imported into normal projects (hence, the files above don’t have a .py extension). And as this is a specialized task, I’ll allow myself to get away with Python shortcuts that I would never touch in normal development, such as import *. In fact, it’s the import * that gives me a lot of the common commands/tools, such as the classes for interacting with Subversion and CVS, managing working directories, etc.
This really stemmed from reading this article by Martin Fowler about people wanting to replace ant with Rake with the advent of JRuby. In the post, Martin states:
The thing with build scripts is that you need both declarative and procedural qualities. The heart of a build file is defining tasks and the dependencies between them. This is the declarative part, and is where tools like ant and make excel. The trouble is that as builds get more complex these structures aren’t enough. You begin to need conditional logic; in particular you need the ability to define your own abstractions. (See my rake article for examples.)
Rake’s strength is that it gives you both of these. It provides a simple declarative syntax to define tasks and dependencies, but because this syntax is an internal DomainSpecificLanguage, you can seamlessly weave in the full power of Ruby.
At that point, I decided that this was the way to go: use Python decorators to wrap ‘task’ functions. The wrapper maintains dependency links, comments, and other things of interest to the internal system; and the wrapper allows the task name to be independent of the function name, allowing easier-to-type tasks for use from the file system. But the ‘task’ function is plain Python. Or, as some of the examples above show, task can be called without the @ symbol that makes it a decorator. Multiple callable actions can be added to a task, potentially allowing for a more ‘declarative’ style:
mochikit.task('minimize').using_action(
    JSMinMap(
        {'style-signal.js': ['Style.js', 'Signal.js']},
        {'async.js': ['Async.js']},
    ))
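JSMinMap itself is never defined in the post; as a guess at the shape such a reusable action object might take - a mapping from output file to source modules that does its work when the task system calls it - here is a hypothetical sketch, with the pack callback standing in for whatever the real system passes:

```python
class JSMinMap(object):
    """Hypothetical declarative action: collects {output: [sources]}
    mappings at construction time, then packs each output when invoked.
    (JSMinMap is not shown in the post; this only illustrates the
    callable-action idea behind using_action.)"""

    def __init__(self, *maps):
        self.mapping = {}
        for m in maps:
            self.mapping.update(m)

    def __call__(self, pack):
        # `pack` is assumed to behave like the jsmin(sources, target)
        # helper above: combine the sources into the named target file.
        for target, sources in sorted(self.mapping.items()):
            pack(sources, target)
```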
Useful, I imagine, for very common patterns. Er. “Recipes”. In any case, it’s a very useful kind of tool. Beats setup.py, INI, or XML based automation languages any day.
This whole Don Imus issue has confused the shit out of me. Talk radio is full of that kind of, um, talk. Anyways, it all feels like the first episode of the latest season of South Park. Sometimes, South Park can crank out a new episode in response to a very recent event, but that didn’t happen here. This episode aired weeks earlier.
As for how or why this Don Imus issue exploded in the way that it did - I just don’t understand (and now I feel like Stan Marsh at the end of that South Park episode). There are so many similar things said all the time by many radio “personalities.”
Media Matters has an excellent post up chronicling the many slurs of Glenn Beck, O’Reilly, and more: It’s not just Imus.
The response to the whole Imus situation just seems wrong: a cause célèbre on which everyone can jump. The latest distraction. How the hell did it get so out of hand? Who did it really offend? Why this “nappy headed ho’s” statement? Why not “ghetto slut” (Boortz)? “Turbanned hoodlums” (Savage)?
Imus is probably far less offensive than many of these other radio people, and neither his firing nor all of this special attention is going to make anything better. Nor did it solve anything. It just provided everybody with some bullshit theater.
From my tumblog: but I don't want my search engine to be a slide show!
Google - remember that search engine of yours? How about making it better by offering some options like result filtering ("I don't feel like shopping right now, I'm trying to research")?
We’ve been using Zope 3 in earnest for just over a year and a half now. I would like to report that in that year and a half our little company has achieved more re-use than at any time in our history. This is real re-use too: libraries of tools and objects that are easily shared among both horizontal and vertical markets, yet customized for each customer as needed. Benefits for one are fairly easily shared with all.
In the Zope 2 days, we tried hard to achieve this. But we were constantly having to re-invent the kind of architecture that I believe really makes this work: adaptation, which also brings dynamic view binding, dynamic UI generation (ie - registering a ‘tab’ for a particular object / interface and having it show up in the UI as necessary, etc.). We had to spend a lot of time making the frameworks that would let us make frameworks.
“Frameworks for making frameworks?” - you heard right. Let’s face it: most web work is custom development. Sometimes custom development is best served by tools like Ruby on Rails or Pylons, or even by plain old PHP. But sometimes you know you’re going to have at least five customers all needing variations on the same thing in the coming months; and potentially more after that. You’re going to need to at least make a library or two.
See, Model-View-Controller isn’t just about “separating business logic from presentation”. It’s about separating it in a way that you can take business objects and logic (the ‘model’ layer; or models and services) and put more than one view on them. And by “more than one view”, I don’t mean “more than one template.” I mean putting wholly different user interfaces on it. I mean being able to take a base library and override a few select options (or many select options) as they appeal to a customer.
We tried to achieve this on some of our Zope 2 products, but it was hard to extract frameworks. We did OK, however, but I think that the most re-use we ever got was about three or four customers on one toolkit. That was over a three or four year span. We re-used patterns and snippets quite often, but it took a lot of work to extract an e-commerce toolkit from a particular customer’s site, and more work still to make it adaptable and workable for different customer requirements.
In the year and a half since using Zope 3 full time, we’ve had double that - and with far greater results. It’s not an easy system to just start using from scratch, but it can be quite worth it.
Being back at work on some legacy Zope 2 projects has made me all the more appreciative.
By the way: for a simpler Zope 3 development experience, check out Grok.
Oh yeah: Dirty Modern. My tumblelog, generally more focused on design, music, etc.. We'll see.
I haven't posted too much in Griddle Noise because it's quite hard, sometimes, to write short entries. I always liked the tumblelog format for being explicitly simple. And Tumblr has an excellent bookmarklet for posting entries.
Web 2.0 has excited us because we lowered our expectations so much. Of course web apps will get better, and one day will deliver the functionality we currently get from desktop software. They may even do more than our desktop applications one day. But isn’t it a tad strange that we think this is all a huge leap forward? - loose wire blog: It's Not the "Death" of Microsoft, it's the "Death" of Software
The author's main point is that while it's cool that people are making Mind Mapping tools in DHTML, they're still a long ways behind desktop apps like MindManager. He goes on to contend that there's just nothing exciting in the "offline" world in recent years.
While it's true that the web has made for some neat and very useful online tools, there are classes of software missed. There's a reason why we'll never get to the "every computer is just a web browser / flash player" ideal: professional software. It's a world I'm entering again as I'm finally getting my home studio together.
I'm talking about apps like Pro Tools, Reaktor, Final Cut, DVD Studio, Aperture, Lightroom, InDesign, Quark XPress, etc.
Granted, most people don't use those applications, but I think that it's a growing market. As technology grows and commodifies, we need tools to deal with it. The gap between pretty-good consumer gear and pretty-good entry level professional gear is pretty small now in many areas: digital photography, digital video, music, etc.
Now that I think about it, Apple has realized this for some time. They have a pretty good upgrade path. For those who get hooked playing with video in iMovie, there's Final Cut Express at a rather reasonable price. Those who really start to do well with that can go up to the full Final Cut Pro. For Music, there's Garage Band, Logic Express, and Logic Pro.
Within those realms, there's a huge array of plug-ins, virtual instruments, specialized sound tools and environments (Ableton Live, Reaktor 5, Max/MSP, etc). I am amazed at the sounds I get out of Reaktor, and that's only a single product in a single company's impressive set of offerings.
Perhaps the new web applications are freeing up resources on our own machines so that it no longer feels like some dreadful work environment, just at home. There are plenty of useful and usable online tools for doing quick writing, sharing, interacting, thinking, and planning. Typically they offer enough to be usable for those small (or even medium) jobs we occasionally encounter, while freeing us from having something overkill like Office for casual, personal writing. It's easier to specialize a computer for audio work by fine tuning system settings, throwing away silly applications, etc, without making that computer into an island. As long as you have a web browser, you can still check email, contribute to a planning document, etc.
But honestly, I don't think the Desktop is going to die - ever. It's great that we can do so much on the web, but I don't think the native experience is going to die, ever.
And even if you're not on a fucking plane, it does matter: when I moved into this loft, it took me a couple of months to get internet access down here. I was working on a lot of things for the office at the time, and I was able to take it home by just using my laptop: at work, I'd synchronise source code, copy stuff to my laptop and/or sync with .Mac's iDisk, and sync with .Mac for my calendar, etc; as such, even though I was offline, I could work. It was then and there, however, that I decided that although I liked Backpack, it wasn't worth paying for: I needed offline access. I needed, well, OmniOutliner and Tinderbox. My personal project files and note-taking documents are just too precious to be left online (this is why I don't and won't use Stikkit). If the occasional monster storm comes along and takes away the Internet for a few days, the worst feeling in the world would be being disconnected from my notes.
Strangely enough, these online note-takers, organizers, etc, all solve a problem that has plagued me until quite recently: how to do effective sharing of data between home, work, and laptop? How to not get out of sync? I love Tinderbox and I have a couple of big Tinderbox files that I keep on .Mac's iDisk. This means I usually have access to it. But sometimes, I forget to sync or close or save the document when I leave work or close the laptop. What about the little bits of random data, not yet filed, or not really worth filing into that larger document? How can I quickly enter, find, and share that info?
The answer didn't come from any web service, although lord knows I tried a few. The answer came when Tinderbox's developer, Eastgate, ingeniously started bundling Barebones' Yojimbo with Tinderbox. I had looked at Yojimbo in the past, but I'd gone through so many personal note taking / note capturing / note filing systems (Mac OS X has MANY). I didn't want to look at another such product and be fighting between "do I file it in Yojimbo? In Tinderbox? DEVONThink? Can I get to it from home?"
But Yojimbo has a killer feature: it syncs with .Mac! The same tool that I've used to keep calendars and contacts and Safari bookmarks transparently shared between three machines, finally someone made one of these note tools that took advantage. Now I have my enter-a-quick-note, file-it-later system that gets updated and merged automatically. No worries about having an out-of-date iDisk, about forgetting to save and sync. And best of all - it's 100% native and usable offline. And it doesn't get lost in the army of tabs since every goddamn web "app" is now just something that gets lost in a browser window (for those who wonder why I take so long to reply to mail sent to my GMail account, well, GMail sucks as an application compared to a native mail app. I just don't watch it regularly enough to stay on top of things).
So, anyways, I love a lot of the new web apps. But people need to get a grip. I’ve been hearing about “the death of the desktop” for eleven or so years now. Stop tricking yourself into thinking you’re that high and mighty. If you don’t understand the true value of native applications, professional applications, personal data, then you don’t understand the desktop’s power. As such, you’re not going to kill it. Yes, please focus on tools that work well on the web, tools that are simple yet useful. But don’t think for a second that I want to spend every second of my computing day in a web browser, nor do I want to spend every second in an Apollo client. It’s not going to replace everything, any more than Java, Netscape Constellation, or even Active Desktop did.