Wednesday, October 31, 2012

Hey Loopmasters: I'd Buy MIDI Loops Too

I often get loops from the Loopmasters Artist Series -- in fact, I've probably spent more than a thousand dollars on that series -- but there's one thing I hate about loop-based production: loop libraries don't ship with MIDI representations.

If you find an amazing bassline (both Nick Thayer products are full of them) but you want to hear it on a different instrument, there's not much you can do.

On a good day, I can figure out the notes on a guitar or a synth, and even on a bad day, I can probably run it through Coda note-by-note, but it'd be a lot more fun if sampling libraries just came with MIDI loops.

Update: looks like the industry-standard solution to this problem is Celemony Melodyne, and Ableton 9 will feature something very similar as well.

Tuesday, October 30, 2012

Atwood: Learn The First Thing About Open Source

Somebody who apparently enjoys Internet drama, run-on sentences, and the passive voice recently said:

Standards processes are not to be missed. They are grand spectacles that unfold in real-time. They are fraught with personalities, big egos, and grand ideological dialogs it's the West Wing, but like the parts that are between the snappy dialog.

All this is of course in reference to Jeff Atwood. I owe Jeff Atwood a lot. I used to aspire to being a real name in the world of tech blogging, until Jeff Atwood conclusively demonstrated that this is not necessarily something to be proud of. The enormous success of his banal and pointless nattering inspired me to find a more interesting goal. He's kind of like Kafka meets the Cookie Monster: he doesn't appear to have ever encountered a method of boring people to death which he didn't relish to the point of near abandon.

Several years ago, I compared Atwood to Jerry Springer:

Watching Jeff Atwood pick a fight with Paul Graham was marvelous entertainment. Not only did we get to see a baboon flinging its doodoo at a lion, we got to see how silly and befuddled the lion had gotten in the winter of its old age. Paul Graham first defended himself, then wrote a post called How To Disagree - which Atwood pretty accurately described as "an EULA for disagreeing with Paul Graham" - and finally had Atwood over for a Y Combinator dinner where they presumably hashed out their differences.

Atwood launched a similar attack on DHH as well. I can only assume he was angling for a dinner with DHH but may be willing to settle for lunch. Or in fact a photo opportunity. However, DHH never responded at all.


This appears to be something Atwood does a lot. He may have even met his business partner this way, and the last time I dissed him, he sent me these "let's be friends" emails, which sounded great in theory but for some reason actually creeped me out a little. So, just to be clear, Jeff, if you're reading this, I have no interest in starting a company with you, and I'm not going to invite you to dinner.

Atwood's latest stalk-target: John Gruber. I'm a fan; in fact, it's agony to me that the "canonical" book on Steve Jobs didn't come from the guy who writes the canonical blog, because if you want to understand Jobs, you go to the blog, not the book. I've always hoped Gruber would one day write his own book on Apple. But instead, I may now have to witness a Gruber/Atwood collaboration. Atwood trolled Gruber, seeking to rope him into some standards committee bullshit around Markdown, instead of doing the respectable thing and just being awesome.

As a fan of Gruber's, I'm ashamed to say that Gruber failed to reach DHH-level disdain, and responded to Atwood's dull trolling -- as indeed I am doing now -- but I can happily report that Gruber understood the purpose of the conversation:

@codinghorror When you tell me to jump, should I ask “How high?”

@codinghorror Next step is for you to offer a $5m donation if I release my college records, right?

I'd love to call the blog post at the root of Atwood's latest shenanigans "a tale told by an idiot, full of sound and fury, signifying nothing." But that's a better description for the movies of Michael Bay, who may actually be a pretty smart guy. Atwood specializes in tales told by an idiot which are nonetheless both sound- and fury-free. I hope he skips straight to the signifying nothing for the sake of expediency, but I suspect he does it because he lacks imagination.

Atwood trolls as a form of recruiting and introduction. It is the modus operandi of a bully. That probably makes Atwood himself a bully of some kind -- perhaps a verbal bully, but one whose words make for incredibly dull weapons -- but he's so fundamentally unintimidating that it's probably more reasonable to call him an aspiring bureaucrat.

Atwood blogged about Markdown a few days ago, but also back in 2009:

The biggest problem with Markdown: John Gruber.

...the fact that there has been no improvement whatsoever to the specification or reference implementation for five years is kind of a problem.

There are some fairly severe bugs in that now-ancient 2004 Markdown 1.0.1 Perl implementation. Bugs that John has already fixed in eight 1.0.2 betas that have somehow never seen the light of day. Sure, if you know the right Google incantations you can dig up the unreleased 1.0.2b8 archive, surreptitiously posted May 2007, and start prying out the bugfixes by hand. That's what I've had to do to fix bugs in our open sourced C# Markdown implementation, which was naturally based on that fateful (and technically only) 1.0.1 release.

I'd also expect a reference implementation to come with some basic test suites or sample input/output files so I can tell if I've implemented it correctly. No such luck; the official archives from Gruber's site include the naked Perl file along with a readme and license. The word "test" does not appear in either. I had to do a ton more searching to finally dig up MDTest 1.1. I can't quite tell where the tests came from, but they seem to be maintained by Michel Fortin, the author of the primary PHP Markdown implementation.

But John Gruber created Markdown. He came up with the concept and the initial implementation. He is, in every sense of the word, the parent of Markdown. It's his baby.

As Markdown's "parent", John has a few key responsibilities in shepherding his baby to maturity. Namely, to lead. To set direction. Beyond that initial 2004 push, he's done precious little of either.


This is patently ridiculous. Creating an open source project in 2004 does not set you up with obligations to do anything in 2012 whatsoever. The entitled whining here is so absurd you have to read the whole thing to really get it, but I'll just summarize for you here:

"John Gruber created something. I want that thing to be different. Therefore, John Gruber hates babies."

I wish I was kidding, by the way. He literally included a picture of a baby in the blog post, and the title refers to "responsible open source parenting." He totally implies that Gruber hates babies because he didn't do work that Atwood is clearly capable of doing himself.

GitHub responded to imperfections in Markdown by creating GitHub-flavored Markdown. Some people call that being awesome; some people call it not sucking; everybody who works in open source knows that it's the only socially acceptable response. Harassing people to demand that they do things for you is not socially acceptable in open source. What is socially acceptable is creating solutions for your problems and then sharing them.

Also, as far as I'm concerned, GitHub-flavored Markdown is now the canonical implementation. That means Atwood's whole effort is barking up the wrong tree anyway. But there's a much more fundamental problem here.

Open source is about giving. It's not about obligations. It's not about "you have to create the fixes I want, or you're a bad person." Jeff Atwood does not understand the most fundamental thing about open source.

Less bullying, please. More giving.

Thursday, October 25, 2012

Some Feminist Movies

There's a lot of internet noise about sexism in the tech world.

You might wonder, if you're a dude, what the fuck am I even supposed to do about all this shit, except not be a dick?

One option is to volunteer with an organization like RailsBridge.

Another option is to watch these movies, and think about them.

The Scream series of horror films
There's a staggering amount of feminist film theory surrounding horror films, especially slasher films, with their Final Girl trope. This is a controversial field, but the Scream series is stuffed to the gills with strong women who pass the Bechdel test with flying colors.

Ruby Sparks
A brilliant deconstruction of the Manic Pixie Dream Girl trope. Eternal Sunshine Of The Spotless Mind is popular for similar reasons, but where Eternal Sunshine scores a few snark points by dissing the Manic Pixie Dream Girl trope as cliché, Ruby Sparks completely demolishes the concept, exposing its fundamentally destructive, sexist foundations. It's like the difference between a spitball and a sledgehammer.

Somersault
An indie film from Australia which tells a very relevant coming-of-age story. Lowest popcorn entertainment factor of all the movies listed here, but illuminating.

Wednesday, October 24, 2012

New Tablet Reviews, Summarized After Painfully Extensive Research

Every Review Of The iPad Mini

It's an iPad, but smaller. Five to fifteen hundred additional words.

Every Review Of The Microsoft Surface

Microsoft has made a mostly useless device, built on faulty assumptions, which is nonetheless very interesting to play with for a certain type of curious geek (or would be, if the iPad didn't exist). In other words, they have finally caught up with the Newton. Five to fifteen hundred additional words.
Admittedly, the research was more painful than extensive.

Saturday, October 20, 2012

Damn Inflation

Damn inflation is a problem faced by programmers, and probably many other types of people. I don't mean damn inflation in contrast to regular inflation. I mean inflation in the value of an individual damn.

When you are very new to code, you have to give a damn just to indent it. But once you adopt good coding habits, and acquire skills in coding tools which will indent your code for free, indenting your code is not something you have to give a damn in order to do. You just do it, either way.

Steve Jobs achieved an amazing damn value in his lifetime. By Apple's standards, nobody at Microsoft gave a damn about Unix, typography, CSS, HTML, color schemes, usability, or numerous other things. Microsoft is and was cash-rich, but damn-poor.

Steve Jobs was lucky to be running Apple, because with so much passion invested in so many aspects of technology, it would have been otherwise impossible for him to find a computer worth a damn. This is not because computers today are not amazing; it's because Mr. Jobs had very valuable damns.

This is the same reason it's hard for web developers to find a Twitter client which is worth a damn. If you're a web developer with a background in fine arts and graphic design, it's even worse. And if you're not just a web developer with a visual background, but also one with skills in online marketing, who's studied the research around distraction and productivity, the only way you can measure the effort invested in most Twitter clients is with the picodamn, a very modest unit of measure which normally only sees use when somebody wants to measure how much opinions are worth on Hacker News.

Damn inflation is why you have to be careful what you invest your energy in.

Thursday, October 18, 2012

Mock Bad Code (Because Fractals)

Everybody encounters bad code sometimes, and wonders what to do.

The answer is simple.

You should mock it, in the sense of making fun of it, because it's often funny.

You should also mock it, in the sense of testing its implementation details, because you will need to rewrite it, and you will need to verify that it works exactly the way you think it does. Because it probably doesn't.

The first thing people do to figure out code is read it. In the case of bad code, that's a mistake.

Bad code never does exactly what it says it does. Bad code is hard to read, or wildly misguided, or both.

You only have three ways to find out what code does: read it, test it, or test it in production. Testing code in production is obviously a bad idea. But when code is bad, reading it is also a bad idea.

The worse code gets, the more likely it becomes that you're better off testing the code than reading it. Bad code uses variable names which are incomprehensible; really bad code uses variable names which are inaccurate; truly awful code uses indecipherable variable names to describe an inaccurate model of the business logic.

When it comes time to test bad code, you have to operate on the assumption that whoever wrote it didn't really understand what they were writing. I have seen better programmers than me implementing call/cc while calling it event bubbling, or totally missing the fact that they implemented a Factory pattern when they thought they were cleverly hacking an inheritance tree. I have also seen much worse programmers than me referring to "length" as "lenght" every single time in a code base, not just in comments and variable names, but in class names as well. A Factory pattern does not resemble OOP inheritance, and "lenght" is not the correct spelling of any word in any language I'm aware of.

One easy method of burning out, as a programmer, is reading bad code when you could be testing it. It pollutes your brain and gives you extra work just figuring out the difference between the bad code's terminology and the actual business logic. It is especially frustrating because the work does not pay off. Filling your brain with the bad code's terminology rarely helps you understand anything, and always constitutes unnecessary mental overhead.

It is a wasteful expenditure. You'll be happier if you start with tests or specs written in extremely simple language. Use these to define exactly what the system does.
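
For example -- a hypothetical characterization spec, where the plain-language descriptions and the order_total method are both stand-ins I'm inventing for illustration:

    # Characterization specs state, in plain language, what the system
    # actually does -- not what the bad code claims it does.
    require "rspec/autorun"

    # stand-in for a method extracted from the legacy code
    def order_total(prices)
      prices.compact.sum
    end

    RSpec.describe "order totals" do
      it "adds up the line item prices" do
        expect(order_total([10, 15])).to eq(25)
      end

      it "treats a missing price as zero" do
        expect(order_total([10, nil])).to eq(10)
      end
    end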

Once you have good code which defines the system, it starts making sense to read that code. And once you've read a few solid descriptions of what the code actually does, take aim at the naming and see how it clears things up.

Where you take the specs after this point is a matter of style, but the way you get there is not just with specifications, but also with specifics. It is always better to find out what the system really does, and this goes double if you inherit the code from anybody else.

Code written by other people who understood the logic they were attempting to describe is a rare joy. But bad code written by other people, when it fails to accurately model the business domain, can then only be understood if you have (or write) some kind of glossary which tells you that when the bad code says (for instance) "OOP inheritance," it means "a Factory pattern."
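
If you do end up writing that glossary, it can be as blunt as a lookup table; these entries come straight from the horror stories above:

    # bad code's term => what the code actually does
    GLOSSARY = {
      "OOP inheritance" => "a Factory pattern",
      "event bubbling"  => "call/cc",
      "lenght"          => "length",
    }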

Save yourself the stress.

By the way, that "lenght" thing is a true story. On this project, I persuaded the head of engineering (who was, in my personal opinion, an insane dipshit) to move automated testing from the lowest priority to the highest priority. I wrote integration tests, and I could literally run the test suite ten times and get different results every time. Eventually, as I cleaned the code up, patterns emerged in the test suite's unpredictable counts of which tests passed and which tests failed. It was like hacking a Feigenbaum sequence generator and modulating the variable which stands for robustness.

Bit of a tangent: I did that in high school, on my graphing calculator.

A Feigenbaum sequence begins as an iterating equation -- the logistic map, or something very close to it:

x_{n+1} = i * x_n * (1 - x_n)

I don't actually remember what that means, but I copied it from either this book or this one and figured out the code to run it on my graphing calculator.

This is a graph of the Feigenbaum sequences for values of x and i between 0 and 1.



This is what I tried to draw on my graphing calculator, but its hardware had very little RAM, and its implementation of BASIC did not support lambdas, so I had to settle for drawing individual sequences, modulating the values of x and i by hand.

An individual sequence could look like this:



If it looks like this, repeating the iterating function with those particular values for x and i zeroes in on one consistent return value.

Or this:



If it looks like this, repeating the iterating function with those particular values for x and i produces an oscillation through multiple return values.

Or this:



If it looks like this, repeating the iterating function with those particular values for x and i produces a very noisy and unpredictable range of values which appears random, but which is more accurately understood as chaotic -- the mathematical term for apparently random data arising from deterministic input.
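
If you want to play along at home, here's a minimal Ruby sketch of the same experiment -- the method name and defaults are mine, and the interesting values of i turn out to live between roughly 3 and 4:

    # Iterate x <- i * x * (1 - x) and keep the trailing values, which
    # reveal whether the sequence converges, oscillates, or goes chaotic.
    def feigenbaum_sequence(i, x: 0.5, steps: 200)
      steps.times.map { x = i * x * (1 - x) }
    end

    feigenbaum_sequence(2.8).last(3)  # converges on one value
    feigenbaum_sequence(3.2).last(4)  # oscillates between two values
    feigenbaum_sequence(3.9).last(4)  # chaotic: deterministic, yet unpredictable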

When I started, the tests I had for the legacy code I brought under control would, as I said, give me different results every time I ran them. If I had graphed the passing or failure of any given test x against the number of times I ran it i, I would have gotten a graph that looked like this:



Once I got the code under control, a predictable rhythm started to emerge in the mapping of any given test's success or failure x against the number of times I ran it i:



As I refactored the code, I got closer and closer to a situation where the spec's passage or failure x mapped against the number of times I ran it i would converge on a single value -- either passing or failing every time -- like this:



You might think it impractical to write a Feigenbaum sequence generator in BASIC, but a) I had a lot of free time in my classes, because I never listened to my teachers, and b) it turned out to be a useful model for fixing extremely unpredictable software.

That's right, motherfuckers. When you're this good, you don't have to make sense.

Entrepreneurs Are Not Necessarily Aristocrats

Republican economic dogma in a nutshell: "In the land of pioneers and self-reliance, the only way we can encourage entrepreneurship is by making sure the ultra-rich can make wildly speculative investments."

Exhibit A

Exhibit B

Exhibit C

Tuesday, October 16, 2012

Stop Breaking The Back Button


Undoubtedly overkill, but the time for sacrificing basic usability to personal cleverness was the 1990s, when the Web was new and its user experience fundamentals were unknown enough to justify experimenting. Breaking the back button two decades later is just shameful, especially now that we have the HTML5 History API.

Sunday, October 14, 2012

A Downside To Crowdsourced "Journalism"






Yes, I'm talking about antirez making a fool of himself and all the resulting noise on Twitter.

Saturday, October 13, 2012

A Refactoring Opportunity Within Rails 3


Rails 3 contains a textbook example of the need for a Replace Method With Method Object refactoring.

Consider this question:

I'm wondering what is the difference between these two methods: ActionView::Helpers::UrlHelper.url_for and ActionController::UrlWriter.url_for?

So we have two similar-but-not-identical versions of the same method. On top of that, the documentation for the ActionView::Helpers method link_to states that link_to accepts the same options the ActionView::Helpers version of url_for accepts.

(Can you believe newbies find this confusing? What a bunch of morons.)

Anyway, the difference between these two methods with the same name is that the ActionView::Helpers version of url_for accepts a subset of the options which the ActionController::UrlWriter version accepts. link_to also accepts that same unnamed subset.

If only there were a mechanism for capturing this pattern of highly similar methods, where one method's possible parameters are a subset of the other method's possible parameters. I can't imagine how such a mechanism might operate, or what it might be called, were it to exist.

Luckily we can discover it by applying the refactoring I mentioned earlier:

Turn the method into its own object so that all the local variables become fields on that object. You can then decompose the method into other methods on the same object.

In other words, whenever the code requires a set of options in more than one place, you can make the code more concise by capturing that set of options in an object. You could, for example, name the object Url or (if you have a fondness for Legend Of Zelda games) Link.
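
A sketch of what that might look like -- hypothetical code with an invented option list, not actual Rails internals:

    # Replace Method With Method Object: the shared option subset gets
    # defined in exactly one place.
    class Url
      SHARED_OPTIONS = [:controller, :action, :anchor, :only_path].freeze

      def initialize(options = {})
        @options = options
      end

      # what ActionView's url_for (and link_to) would accept
      def view_options
        @options.select { |key, _| SHARED_OPTIONS.include?(key) }
      end

      # what ActionController::UrlWriter's url_for would accept
      def controller_options
        @options
      end
    end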

Apologies to the entire open source community for raising this on my blog rather than participating in the open source development process around Rails, especially since it's entirely possible this is already fixed in Rails 4. However, I've often gotten the impression that certain key members of that process were, for me personally, unpleasant to interact with, and I'm not interested in participating in a process which requires soliciting their approval. I do, however, hope that my insight here is useful to someone, somewhere.

Trademarks used here are the property of their respective owners, and appear without permission, but with the full, total, and obvious protection of fair use doctrine in trademark law. The web comic containing these trademarks operates as social commentary on open source culture, represents my opinions and subjective impressions only, operates partly as overstatement for the sake of contrast, should be interpreted with a certain degree of irony, and makes absolutely no claims of factual or historical accuracy (or indeed inaccuracy) whatsoever.

The Dark Side Of Trolling

Anyone who enjoys stirring up trouble on the Internet should give this a read:

When it comes to mods, the political model of Reddit is not so much a vast digital democracy, as it's often framed by fans and users, as online feudalism. Moderators like Violentacrez are given absolute control over their turf in exchange for keeping the kingdom of Reddit strong. Moderators become more or less powerful in direct relation to the number and popularity of the subreddits they moderate, so they try to take over other subreddits to boost their profile in the community. Inevitably, Reddit's administrators develop relationships with the most influential moderators. Like feuding medieval lords vying for the king's favor, moderators form alliances or wage epic flame wars over power struggles.

This is how Violentacrez, Reddit's creepiest user, also became its most powerful.


My rule for trolling is that it has to deliver insights or entertainment equal to or greater than the attention it grabs. Of course, it's impossible to calculate that with actual precision, and there's another important rule: don't be evil. I think there are a lot of trolls who've never considered this one.

Thursday, October 11, 2012

Why Does Twitter Suck? Believe It Or Not, It's The Fault Of Republicans

Twitter's deteriorating a great deal these days, both in terms of its UI/UX and in terms of the misguided choices it makes in defining its business model.

"I think of the company as a technology company that is in the media business," [Twitter's CEO] told a room full of editors and reporters. "Our business is an advertising business, we don't sell technology."

This is driven by the economics of venture capital; a modest success, in terms of starting a software business, is an abject failure in VC terms. Thus Twitter never even condescended to compete with Yammer -- a profitable, customer-charging, but otherwise highly similar business -- opting instead to aim higher, in financial terms, and aspire to becoming Crappy Television 2.0. An enterprise service business which sold for $1.2 billion is just not worth it from a VC point of view.

And why does social media revolve around startups defined under VC terms?

Because Republican policies over the past few decades have overwhelmingly favored the superwealthy, specifically with capital gains tax law, which makes it inevitable that most people with very large amounts of money will invest it in risk-seeking, speculative ways.

This is also why it's incredibly easy to find work if you build tech startups, but very difficult otherwise: Republican economic policies have inflated the profits of risk-seeking, speculative investment so much that it's now a very over-represented form of investment.

In a different economic environment, Twitter -- which began as a very cheap project -- might have made different strategic decisions. If the playing field were not so steeply tilted in favor of capital gains, a more sober economic environment would favor smaller investments, and products which delivered more immediate economic impact.

Twitter sucks because of the Republicans.

Tuesday, October 9, 2012

When Pixar Makes A Horror Movie, The World Will Be A Better Place

I want to see Pixar make a horror movie, but it will be a long time before that ever happens. Here in America, we assume any animated film is for children. This horrible belief not only blocks me from my Pixar horror movie, it also marginalizes anime.

I hate this so much I want to destroy it. It is an insult to artists everywhere: "Your drawings have no power to terrify. Your drawings have no power to touch hearts or provoke thoughts. Your drawings can only amuse children."

I believe there is a link between this horrible cultural norm and another, equally horrible American cultural norm: anti-intellectualism.

Compare two images of a Baltimore oriole:

[two images: a photograph of a Baltimore oriole, and a stylized illustration of one]

One of these images presents a literal recording of an oriole; another presents an idea of an oriole.

One of these images could appear in a serious American film intended for adults. One of them could not.

Our cultural norm, that animated films are for children, sends a message: "A literal recording can be a serious thing, but an idea can only ever be trivial and harmless."

This is, ironically, a very harmful idea.

You Fuckers Are Adorable












Monday, October 8, 2012

I Think This Would Be Better

Exhibit A



Exhibit B



(About 2.5 minutes in.)

Exhibit C


Stylization And Abbreviation In Asian Cinema (And Elsewhere)

Kung fu movies use editing and cinematography which are stylized and abbreviated, rather than literal. This is especially evident in fight scenes.

Anime uses character designs which are stylized and abbreviated, rather than literal.

I don't see that a lot in Western film (by which I mean all Western film, but in practice, mostly French, American, and British film). I see it a lot in Western comics, video games, and user interface design, but not in film.

Thursday, October 4, 2012

How Do I Find Out If rake db:drop Failed?

bundle exec rake db:drop 2>&1 >/dev/null | grep 'skipping.'

(The order of the redirections matters: 2>&1 first points stderr at the pipe, and >/dev/null then silences stdout, so grep searches only the error stream, where the "skipping" message lands if the drop fails.)

Monday, October 1, 2012

LSD Effect In Experimental Video

I threw this together as an experiment, and I'm happy with how it turned out. Thanks of course are due to the actress in this, Kristin McCoy.

Thinking Out Loud: Clients For Twitter And For GitHub Notifications

Twitter's too useful not to use at all, but I can't use it as intended, either. I get a lot of unanticipated communication on Twitter. I also get a lot of unwanted communication on Twitter. The first major problem of Twitter usability is differentiating unanticipated communication from unwanted communication.

A few years ago I came up with a plan for fixing this: a Twitter summarizer which, in addition to screening out any tweets from people I don't follow, would also recognize similarity between tweets and summarize highly similar ones, making each one optionally visible but hidden by default.

For instance, if 100 people you follow all tweet a link to the same blog post, all you really need to know is the link and how many people retweeted it. You might also want to know some of the specific individuals in that crowd, but you absolutely don't need to see that same link 100 separate times. This feature would be even more useful in eliminating the annoyance inherent in seeing the same joke or witticism retweeted or rephrased countless times.
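
As a sketch -- every name here is invented, and grouping on the first link is deliberately crude, since real similarity detection would have to be fuzzier:

    # Collapse tweets that share a link into one summary entry, which a
    # client could show as "link + count", details hidden by default.
    Tweet = Struct.new(:author, :text) do
      def first_link
        text[%r{https?://\S+}]
      end
    end

    def summarize(tweets)
      linked, individual = tweets.partition(&:first_link)
      summaries = linked.group_by(&:first_link).map do |link, group|
        { link: link, count: group.size, authors: group.map(&:author) }
      end
      { summaries: summaries, individual: individual }
    end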

That feature is definitely worth building. But banning tweets from people you don't follow -- that's a very blunt instrument. A superior alternative might be segregating people you don't follow in a kind of quarantine, or subjecting them to negativity screening -- i.e., filter those tweets through sentiment analysis, and only show tweets from strangers when those tweets meet or exceed minimum levels of politeness or positivity.
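
Same caveats, as a sketch; sentiment_score is an assumed black box returning something in -1.0..1.0, because sentiment analysis is a whole project in itself:

    # Strangers' tweets get quarantined unless they clear a positivity bar.
    # (Reuses the Tweet struct from the summarizer sketch above.)
    POSITIVITY_FLOOR = 0.2  # arbitrary threshold

    def triage(tweet, following, sentiment_score)
      return :timeline if following.include?(tweet.author)
      sentiment_score.call(tweet.text) >= POSITIVITY_FLOOR ? :timeline : :quarantine
    end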

When your friends are mad at you, you need to know about it. When some random Internet dickbag has a grudge against you, the absolute most information you might need is a temperature-like ambient Internet hate meter.



If the hate gets truly severe, you might want to take a peek to find out what people are so worked up about, but nine times out of ten it's not worth any attention at all.

Basically, most people who design Twitter clients work on the assumption that they're building windows onto a stream of data you want to watch. I think the best way to design a Twitter client is to build a robot receptionist.

Much of the above applies to GitHub notifications, but there are special considerations for GitHub. First of all, if I'm cc:ed in a GitHub notification, that notification does not currently receive any special highlighting within GitHub's UI. I have many, many times missed GitHub notifications intended for me personally, due to the overwhelming volume of other GitHub notifications in the same project.

Second, GitHub allows you to turn off notifications by project, but this is a coarse-grained solution. I'm guessing plenty of people out there need to know every single thing about one branch of a project and nothing at all, ever, about another branch of a project. You have access to project, branch, message text, filenames, dates, and languages; if I'm only interested in JavaScript notifications for a given project, or if I'm only interested in notifications on X branch but not Y branch, or I want to see messages which mention me by name in a special prioritized box, this all should be trivial information to obtain, and trivial UI to implement.

Imagine for the sake of argument a GitHub notifications API, with all this data contained in JSON objects. You hit the API, you pull your notifications, and then your lovely little robot butler reads through all these notifications, searching for indicators that you are likely to give a shit.
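
Here's that robot butler sketched in Ruby, against the imagined API -- every field name is a guess, not GitHub's actual schema:

    require "json"

    ME = "your_username_here"

    # rules that indicate you're likely to give a shit
    PRIORITY_RULES = [
      ->(n) { n["mentions"].to_a.include?(ME) },
      ->(n) { n["branch"] == "master" && n["language"] == "JavaScript" },
    ]

    def triage_notifications(raw_json)
      JSON.parse(raw_json).partition do |notification|
        PRIORITY_RULES.any? { |rule| rule.call(notification) }
      end
    end

    # important, ignorable = triage_notifications(api_response)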

Perhaps it's difficult to accept this, but the appropriate response to nearly all Internet communication is "fuck off and stop bothering me." This is an unpleasant thing to say in real life, which is why you should instead have software doing it for you.

In every form of Internet communication, the number of messages you give a shit about is always much smaller than the total number of messages you receive. Software design for messaging clients virtually never acknowledges this fundamental and consistent reality.

The cult of inbox zero is a ship of fools. It is the information-age equivalent of a slave religion, where you glorify the most obedient slave to an insane master. You should not get a high five and a merit badge every time you get to a state where you can calmly and intelligently choose what to do next; being able to calmly and intelligently choose what to do next should be your default state.

People really need to design messaging systems around the obvious reality that give-a-shit is a precious and rare treasure. For some insane reason, this is not what we do; most software is designed with the utterly bizarre assumption that all incoming communication receives a standard, uniform, and equal subdivision of give-a-shit. This is why your email inbox looks like Hacker News, instead of Hacker Newspaper.



The newspaper model of information design uses typography not only to maximize legibility but also, more importantly, to communicate hierarchy.



It's very, very easy to infer from this newspaper design the relative priority of the stories it presents. Twitter, GitHub notifications, and email should all look like this.



Update: the option to route different organizations' notifications to different email addresses in the new GitHub notifications system is definitely awesome.