Internet Trends 2013

Mary Meeker's presentation on internet usage trends. The comparison between the web and traditional media is interesting: in China, people already spend more time browsing web pages than watching TV. Demand for IT professionals will double over the next decade.

 


(Slides 9 and 88 of the presentation)

 

“Software engineering doesn’t work”, by Glenn Vanderburg

Glenn Vanderburg's talk at the latest Lone Star Ruby Conference is worth a look. Despite the provocative title, the presentation gives a very practical view of how agile methods can be used to produce quality software without blowing the budget. The video of the talk (in English) can be accessed by clicking here.

60% Of Apps In Android Market Are Free (Vs. 30% Or Less In Other App Stores)

The post below, published on TechCrunch, explains part of my decision to swap my old (and good) HTC S621 for a Samsung Galaxy running Android. The list of applications available in the Android Market is huge, and most of them are free.

————————————————————–

App store analytics provider Distimo yesterday published its latest report, once again zooming in on the pricing of mobile applications across a variety of platforms.

Consistent with its previous findings, Google’s Android Market has by far the largest share of free applications available compared to other mobile app stores, and the gap is also widening.

In July 2010, 60% of all applications on Android Market were free of charge, an increase of three percentage points since May 2010, when the share was 57%.

As you can tell from the graph below, that share is more than double the share of free apps on other mobile app stores, with the exception of Palm’s App Catalog (albeit barely).

The share of free applications is smallest on Windows Marketplace for Mobile (22%), followed by the Apple App Store for iPad (26%) and RIM’s BlackBerry App World (26%).


Let’s take a closer look at the prices of paid apps across mobile application stores.

Distimo posits that the average price of the 100 most popular apps in Android Market and Palm’s App Catalog is higher than the average price of the entire catalogue of applications.

While the average price of all applications is only 16% higher in the Apple App Store for iPad than in the App Store for iPhone, the average price of the 100 most popular applications is nearly three times as high in the former.

More than 60% of applications are priced below or equal to $2 in the App Store for iPhone, Android Market, Nokia’s Ovi Store and Palm’s App Catalog. The proportion of applications priced below or equal to $2 is much lower in the App Store for iPad and Windows Marketplace for Mobile.

Notably, it seems prices of apps for iOS devices are on the rise. The proportion of paid applications priced below $1 on the Apple App Store for iPad and the Apple App Store for iPhone decreased in July 2010, from 30% to 25% and from 49% to 45%, respectively.


Who needs math skills?

After almost a month of classes in the PhD program at CIN, I realized how rusty I am on some basic principles of calculus. During my undergraduate studies I took four calculus courses, three algebra courses and one statistics course. Even so, I had enormous difficulty keeping up with the first classes – largely because of the time that passed between graduation and this new contact with the world of numbers. But the main reason was the simple fact that I have never needed that knowledge in my career as a systems analyst. The subject came up when I read Alan Skorkin's post, which deals with exactly the problems I am facing now. It is worth a look.

———————————————————-

You Don’t Need Math Skills To Be A Good Developer But You Do Need Them To Be A Great One

A little while ago I started thinking about math. You see, I’ve been writing software for quite a few years now and to be totally honest, I haven’t yet found a need for math in my work. There has been plenty of new stuff I’ve had to learn/master: languages, frameworks, tools, processes, communication skills and library upon library of stuff to do just about anything you can think of; math hasn’t been useful for any of it. Of course this is not surprising, the vast majority of the work I’ve been doing has been CRUD in one form or another, that’s the vast majority of the work most developers do in these interweb times of ours. You do consulting – you mostly build websites, you work for a large corporate – you mostly build websites, you freelance – you mostly build websites. I am well aware that I am generalising quite a bit, but do bear with me, I am going somewhere.

Eventually you get a little tired of it, as I did. Don’t get me wrong, it can be fun and challenging work, providing opportunities to solve problems and interact with interesting people – I am happy to do it during work hours. But the thought of building yet more websites in my personal time has somewhat lost its luster – you begin to look for something more interesting/cool/fun, as – once again – I did. Some people gravitate to front-end technologies and graphical things – visual feedback is seductive – I was not one of them (I love a nice front-end as much as the next guy, but it doesn’t really excite me), which is why, when I was confronted with some search-related problems, I decided to dig a little further. And this brings me back to the start of this story, because as soon as I grabbed the first metaphorical shovel-full of search, I ran smack-bang into some math and realized just how far my skills had deteriorated. Unlike riding a bike – you certainly do forget (although I haven’t ridden a bike in years so maybe you forget that too :) ).

Broadening Horizons

Learning a little bit about search exposed me to all sorts of interesting software-y and computer science-y related things/problems (machine learning, natural language processing, algorithm analysis etc.) and now everywhere I turn I see math and so feel my lack of skills all the more keenly. I’ve come to the realization that you need a decent level of math skill if you want to do cool and interesting things with computers. Here are some more in addition to the ones I already mentioned – cryptography, games AI, compression, genetic algorithms, 3d graphics etc. You need math to understand the theory behind these fields which you can then apply if you want to write those libraries and tools that I was talking about – rather than just use them (be a producer rather than just a consumer – to borrow an OS metaphor :) ). And even if you don’t want to write any libraries, it makes for a much more satisfying time building software, when you really understand what makes things tick, rather than just plugging them in and hoping they do whatever the hell they’re supposed to.

The majority of developers will tell you that they’ve never needed math for their work (like I did a couple of paragraphs above :) ), but after musing on it for a while, I had a thought. What we might have here is a reverse Maslow’s hammer problem. You know the one – when you have a hammer, everything looks like a nail. It is a metaphor for using a favourite tool even when it may not be best for the job at hand. Math is our hammer in reverse. We know the hammer exists but don’t quite know how to use it, so even when we meet a problem where our hammer would be the perfect tool, we never give it serious consideration. The screwdriver was good enough for my granddaddy, it was good enough for my daddy and it is good enough for me, who needs a hammer anyway? The trick with math is – people are afraid of it – even most programmers, you’d think we wouldn’t be, but we are. So, we turn our words into a self-fulfilling prophecy. It’s not that I don’t need math for my work; it’s just that I don’t really know it, and even if I do, I don’t know how to apply it. So I get by without it, and when you make do without something for long enough, after a while you don’t even notice it’s missing and so need it even less – self-fulfilling prophecy.

Here is some food for thought about something close to all our hearts – learning new skills. As a developer in the corporate world, you strive to be a generalizing specialist (read this book if you don’t know what I am talking about). You try to be decent at most things and really good at some. But what do you specialize in? Normally people choose a framework or two and a programming language and go with that, which is fine and worthwhile. But consider the fact that frameworks, and to a lesser extent languages, have a limited shelf life. If you’re building a career on being a Hibernate, Rails or Struts expert (the Struts guys should really be getting worried now :) ), you will have to rinse and repeat all over again in a few years when new frameworks come along to supersede the current flavour of the month. So is it really the best investment of your time – maybe, but then again maybe not. Math, on the other hand, is not going away any time soon. Everything we do in our field is built upon solid mathematical principles at its root (algorithms and data structures being a case in point), so time spent keeping up your math skills is arguably never wasted. And it, once again, comes down to really understanding something rather than just using it by rote – math can help you understand everything you do more deeply, when it comes to computers. In fact, as Steve Yegge said, what we do as programmers is so much like math we don’t even realise it.

What/Who Makes A Difference

Knuth

You don’t believe me? Then consider this. Most of the people who are almost universally respected in our field as great programmers are also great mathematicians. I am talking people like Donald Knuth, Edsger W. Dijkstra, Noam Chomsky, Peter Norvig. But then again these guys weren’t really developers, they were computer scientists, so it doesn’t really count, right? I guess, but then again, maybe we shouldn’t really talk until our output in pure lines of code even begins to approach 10% of what these people have produced. Of course, you can be successful and famous without being a boffin; everyone has heard of Gavin King or DHH. That’s kinda true (although it’s an arguable point whether or not many people have heard of Gavin or DHH outside their respective niches), but “heard of” and universally respected are different things, about as different as creating a framework and significantly advancing the sum-total of human knowledge in your field (don’t get me wrong, I respect Gavin and David, they’ve done a hell of a lot more than I have, but that doesn’t make what I said any less of a fact). How is all of this relevant? I dunno, it probably isn’t, but I thought I’d throw it in there anyway since we’re being introspective and all.

The world is getting filled up with data, there is more and more of it every day and whereas before we had the luxury of working with relatively small sets of it, these days the software we write must operate efficiently with enormous data sets. This is increasingly true even in the corporate world. What this means is that you will be less and less likely to be able to just “kick things off” to see how they run, because with the amount of data you’ll be dealing with it will just grind to a halt unless you’re smart about it. My prediction is that algorithm analysis will become increasingly important for the lay-programmer, not that it wasn’t before, but it will become more so. And what do you need to be a decent algorist – you guessed it, some math skills.
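
To make the point above concrete, here is a sketch of my own (not from the original post): the same membership question answered with a linear scan over a list versus a hash-based lookup. On a few million entries the difference is already dramatic, and it only grows with the data.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupComparison {
    public static void main(String[] args) {
        int n = 5000000; // five million entries; adjust to taste
        List<Integer> list = new ArrayList<Integer>(n);
        Set<Integer> set = new HashSet<Integer>(n);
        for (int i = 0; i < n; i++) {
            list.add(i);
            set.add(i);
        }

        long t0 = System.nanoTime();
        list.contains(n - 1);              // O(n): walks the whole list
        long t1 = System.nanoTime();
        set.contains(n - 1);               // O(1) expected: a hash lookup
        long t2 = System.nanoTime();

        System.out.println("linear scan: " + (t1 - t0) / 1000 + " microseconds");
        System.out.println("hash lookup: " + (t2 - t1) / 1000 + " microseconds");
    }
}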

So, what about me? Well, I’ve decided to build up/revive my math skills a little bit at a time, there are still plenty of books to read and code to write, but I will try to devote a little bit of my time to math at least once in a while, because, like exercise, a little bit once in a while is better than nothing (to quote Steve Yegge yet again). Of course I have a bit of an ace up my sleeve when it comes to math, which is good for me, but luckily with this blog, we might all benefit (I know you’re curious, I’ll tell you about it soon :) ).

Where Do You See Yourself In 5 Years?

Wakeboarding

So, is all this math gonna be good for anything? It’s hard to say in advance, I am pretty happy with where I am at right now and so might you be, but it’s all about potential. End of the day, if you’re a developer in the corporate world you don’t really need any math. If you’re happy to go your entire career doing enterprise CRUD apps during work hours and paragliding or wakeboarding (or whatever trendy ‘sport’ the geeky in-crowd is into these days) during your off time then by all means, invest some more time into Spring or Hibernate or Visual Studio or whatever. It will not really limit your potential in that particular niche; you can become extremely valuable – even sought after. But if you strive for diversity in your career and want to have the ability to try your hand at almost any activity that involves code, from information retrieval to Linux kernel hacking – in short, if you want to be a perfect mix of developer, programmer and computer scientist – you have to make sure your math skills are up to scratch (and hell, you can still go wakeboarding if you really want :) ). Long story short, if you grok math, there are no doors that are closed to you in the software development field, if you don’t – it’s going to be all CRUD (pun intended)!

NoSQL: The end of relational databases?

Digg is yet another big Web 2.0 name that has just migrated its (gigantic) data sets from the relational world to the “post-relational” model, the latter better known as NoSQL. They have joined companies such as Google, Amazon, eBay, LinkedIn, Twitter and Facebook, with the goal of providing performance levels better suited to the queries run against the monstrous databases typical of Web 2.0 applications. To get a sense of the problem, imagine a query against eBay's 2 PB (two petabytes) of data being run online by hundreds or thousands of users simultaneously. Now imagine doing a join over that many rows across the tables of a relational database.

Digg's strategy is described in two posts on the service's blog. The first discusses the scalability problems of their database infrastructure, based on a partitioned master-slave MySQL setup. The text shows an example of a query with a join that took 14 seconds to complete. After migrating to a non-relational datastore, based on Cassandra's distributed model, the same query could be performed in less than a second. The second post covers the complete migration of Digg's main services to Cassandra and also lists the development team's main contributions to the project.

The Future of Java

An interesting post by Eric Bruno on the future of Java as a platform and as a programming language. The original post can be seen here. A curious fact from the text is the number of dynamic languages that currently run on the JVM; the list includes Rexx, Ruby, JavaScript, Python, PHP, Groovy, Clojure and Scala. The justification for this movement is the significant improvement in application execution provided by features such as the platform's Just-In-Time compilers.

Although many people have started dabbling in other languages as substitutes for Java in recent months (I include myself here), it is still the most widely used language among developers, with more than nine million of them building applications for the most varied platforms and devices. One thing that scared the Java community was the recent purchase of Sun by Oracle and the uncertainty about the platform's future created by the acquisition. The Oracle folks have committed to keeping the Java Community Process as the forum for discussing and evolving the platform, and have also said that the new version (provisionally called Java 7) will bring some significant changes, including modularization of the JVM, native support for other languages and better support for multi-core processing. The integration of the JVM with BEA's JRockit will lead to a new garbage collection mechanism, bringing some additional performance benefits to applications.

Eric also predicts that the current pain web developers feel when building user interfaces should be eased as new versions of JavaFX evolve. I confess I was not very excited about studying this new API; at the moment I am studying and developing a small application with Vaadin, a framework created as an extension of GWT components. As soon as the application goes live, I will share the URL so you can access it.

The 25 most dangerous programming errors

In my journey to study other programming languages (and step out of the Java world a bit), I started with the security-related chapters for Scala and Ruby on Rails. For the former, I recommend reading chapter 3 of “The Definitive Guide to Lift: A Scala-based Web Framework“, aimed at building web applications with Scala and the Lift framework. For the latter there is an entire book dedicated to the subject: “Security on Rails“. But before reading those chapters, my attention was caught by the article published by the Common Weakness Enumeration entitled “2010 CWE/SANS Top 25 Most Dangerous Programming Errors”. It presents a very well structured summary of the main programming errors that make a web application vulnerable. Reading the article is quicker than reading an entire book on security, and it can be used as a checklist for your application.

The full article can be seen here. It is worth a look, regardless of the programming language you are using.

2010 CWE/SANS Top 25 Most Dangerous Programming Errors

“Software Engineering ≠ Computer Science”, by Chuck Connell

An excellent post by Chuck Connell at Dr. Dobb's, defending the thesis that Software Engineering (SE) is not part of Computer Science (!). I found his point of view very interesting: SE does not need mathematical rigor in its activities, basically because software is built with “creativity, vision, multi-disciplinary thinking, and humanity”. It reminds me of some coworkers in the recent past trying to prove, by sheer force, that Function Point Analysis is reliable :-). The passage where he argues that these estimation techniques are essentially subjective, because of the human factors built into their formulation, is an excellent argument in favor of more agile methodologies for predicting the many aspects of building software.

See Connell's post below.

—————————————–

“A few years ago, I studied algorithms and complexity. The field is wonderfully clean, with each concept clearly defined, and each result building on earlier proofs. When you learn a fact in this area, you can take it to the bank, since mathematics would have to be inconsistent to overturn what you just learned. Even the imperfect results, such as approximation and probabilistic algorithms, have rigorous analyses about their imperfections. Other disciplines of computer science, such as network topology and cryptography also enjoy similar satisfying status.

Now I work on software engineering, and this area is maddeningly slippery. No concept is precisely defined. Results are qualified with “usually” or “in general”. Today’s research may, or may not, help tomorrow’s work. New approaches often overturn earlier methods, with the new approaches burning brightly for a while and then falling out of fashion as their limitations emerge. We believed that structured programming was the answer. Then we put faith in fourth-generation languages, then object-oriented methods, then extreme programming, and now maybe open source.

But software engineering is where the rubber meets the road. Few people care whether P equals NP just for the beauty of the question. The computer field is about doing things with computers. This means writing software to solve human problems, and running that software on real machines. By the Church-Turing Thesis, all computer hardware is essentially equivalent. So while new machine architectures are cool, the real limiting challenge in computer science is the problem of creating software. We need software that can be put together in a reasonable amount of time, for a reasonable cost, that works something like its designers hoped for, and runs with few errors.

With this goal in mind, something has always bothered me (and many other researchers): Why can’t software engineering have more rigorous results, like the other parts of computer science? To state the question another way, “How much of software design and construction can be made formal and provable?” The answer to that question lies in Figure 1.

Figure 1: The bright line in computer science

The topics above the line constitute software engineering. The areas of study below the line are the core subjects of computer science. These latter topics have clear, formal results. For open questions in these fields, we expect that new results will also be formally stated. These topics build on each other — cryptography on complexity, and compilers on algorithms, for example. Moreover, we believe that proven results in these fields will still be true 100 years from now.

So what is that bright line, and why are none of the software engineering topics below it? The line is the property “directly involves human activity”. Software engineering has this property, while traditional computer science does not. The results from disciplines below the line might be used by people, but their results are not directly affected by people.

Software engineering has an essential human component. Software maintainability, for example, is the ability of people to understand, find, and repair defects in a software system. The maintainability of software may be influenced by some formal notions of computer science — perhaps the cyclomatic complexity of the software’s control graph. But maintainability crucially involves humans, and their ability to grasp the meaning and intention of source code. The question of whether a particular software system is highly maintainable cannot be answered just by mechanically examining the software.

The same is true for safety. Researchers have used some formal methods to learn about a software system’s impact on people’s health and property. But no discussion of software safety is complete without appeal to the human component of the system under examination. Likewise for requirements engineering. We can devise all sorts of interview techniques to elicit accurate requirements from software stakeholders, and we can create various systems of notation to write down what we learn. But no amount of research in this area will change the fact that requirement gathering often involves talking to or observing people. Sometimes these people tell us the right information, and sometimes they don’t. Sometimes people lie, perhaps for good reasons. Sometimes people are honestly trying to convey correct information but are unable to do so.

This observation leads to Connell’s Thesis:

Software engineering will never be a rigorous discipline with proven results, because it involves human activity.

This is an extra-mathematical statement, about the limits of formal systems. I offer no proof for the statement, and no proof that there is no proof. But the fact remains that the central questions of software engineering are human concerns:

  • What should this software do? (requirements, usability, safety)
  • What should the software look like inside, so it is easy to fix and modify? (architecture, design, scalability, portability, extensibility)
  • How long will it take to create? (estimation)
  • How should we build it? (coding, testing, measurement, configuration)
  • How should we organize the team to work efficiently? (management, process, documentation)

All of these problems revolve around people.

My thesis explains why software engineering is so hard and so slippery. Tried-and-true methods that work for one team of programmers do not work for other teams. Exhaustive analysis of past programming projects may not produce a good estimation for the next. Revolutionary software development tools each help incrementally and then fail to live up to their grand promise. The reason is that humans are squishy and frustrating and unpredictable.

Before turning to the implications of my assertion, I address three likely objections:

The thesis is self-fulfilling. If some area of software engineering is solved rigorously, you can just redefine software engineering not to include that problem.

This objection is somewhat true, but of limited scope. I am asserting that the range of disciplines commonly referred to as software engineering will substantially continue to defy rigorous solution. Narrow aspects of some of the problems might succumb to a formal approach, but I claim this success will be just at the fringes of the central software engineering issues.

Statistical results in software engineering already disprove the thesis.

These methods generally address the estimation problem and include Function Point Counting, COCOMO II, PROBE, and others. Despite their mathematical appearance, these methods are not proofs or formal results. The statistics are an attempt to quantify subjective human experience on past software projects, and then extrapolate from that data to future projects. This works sometimes. But the seemingly rigorous formulas in these schemes are, in effect, putting lipstick on a pig, to use a contemporary idiom. For example, one of the formulas in COCOMO II is PersonMonths = 2.94 × Size^B, where B = 0.91 + 0.01 × Σ SF_i, and SF is a set of five subjective scale factors such as “development flexibility” and “team cohesion”. The formula looks rigorous, but is dominated by an exponent made up of human factors.
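
(To see how much that human-factor exponent dominates, here is a small numeric sketch of the quoted formula, added as an illustration rather than taken from Connell's text; the scale-factor ratings are invented and nominal effort multipliers are assumed, so this is not the full COCOMO II model.)

public class CocomoSketch {
    // PersonMonths = 2.94 * Size^B, with B = 0.91 + 0.01 * sum(SF_i)
    static double personMonths(double sizeKsloc, double[] scaleFactors) {
        double sum = 0.0;
        for (double sf : scaleFactors) {
            sum += sf;
        }
        double b = 0.91 + 0.01 * sum;
        return 2.94 * Math.pow(sizeKsloc, b);
    }

    public static void main(String[] args) {
        double[] favorable = {2, 2, 2, 2, 2};    // invented ratings, sum = 10, B = 1.01
        double[] unfavorable = {4, 4, 4, 4, 4};  // invented ratings, sum = 20, B = 1.11
        // The same 100 KSLOC project: roughly 308 vs. 488 person-months.
        System.out.println(personMonths(100, favorable));
        System.out.println(personMonths(100, unfavorable));
    }
}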

Formal software engineering processes, such as cleanroom engineering, are gradually finding rigorous, provable methods for software development. They are raising the bright line to subsume previously squishy software engineering topics.

It is true that researchers of formal processes are making headway on various problems. But they are guilty of the converse of the first objection: they define software development in such a narrow way that it becomes amenable to rigorous solutions. Formal methods simply gloss over any problem centered on human beings. For example, a key to formal software development methods is the creation of a rigorous, unambiguous software specification. The specification is then used to drive (and prove) the later phases of the development process. A formal method may indeed contain an unambiguous semantic notation scheme. But no formal method contains an exact recipe for getting people to unambiguously state their vague notions of what software ought to do.

To the contrary of these objections, it is my claim that software engineering is essentially different from traditional, formal computer science. The former depends on people and the latter does not. This leads to Connell’s Corollary:

We should stop trying to prove fundamental results in software engineering and accept that the significant advances in this domain will be general guidelines.

As an example, David Parnas wrote a wonderful paper in 1972, On The Criteria To Be Used in Decomposing Systems into Modules. The paper describes a simple experiment Parnas performed about alternative software design strategies, one utilizing information hiding, and the other with global data visibility. He then drew some conclusions and made recommendations based on this small experiment. Nothing in the paper is proven, and Parnas does not claim that anyone following his recommendations is guaranteed to get similar results. But the paper contains wise counsel and has been highly influential in the popularity of object-oriented language design.

Another example is the vast body of work known as CMMI from the Software Engineering Institute at Carnegie Mellon. CMMI began as a software process model and has now grown to encompass other kinds of projects as well. CMMI is about 1000 pages long — not counting primers, interpretations, and training materials — and represents more than 1000 person-years of work. It is used by many large organizations and has been credited with significant improvement in their software process and products. But CMMI contains not a single iron-clad proven result. It is really just a set of (highly developed) suggestions for how to organize a software project, based on methods that have worked for other organizations on past projects. In fact, the SEI states that CMMI is not even a process, but rather a meta-process, with details to be filled in by each organization.

Other areas of research in this spirit include design patterns, architectural styles, refactoring based on bad smells, agile development, and data visualization. In these disciplines, parts of the work may include proven results, but the overall aims are systems that foundationally include a human component. To be clear: Core computer science topics (below the bright line) are vital tools to any software engineer. A background in algorithms is important when designing high-performance application software. Queuing theory helps with the design of operating system kernels. Cleanroom engineering contains some methods useful in some situations. Statistical history can be helpful when planning similar projects with a similar team of people. But formalism is just a necessary, not sufficient, condition for good software engineering. To illustrate this point, consider the fields of structural engineering and physical architecture (houses and buildings).

Imagine a brilliant structural engineer who is the world’s expert on building materials, stress and strain, load distributions, wind shear, earthquake forces, etc. Architects in every country keep this person on their speed-dial for every design and construction project. Would this mythical structural engineer necessarily be good at designing the buildings he or she is analyzing? Not at all. Our structural engineer might be lousy at talking to clients, unable to design spaces that people like to inhabit, dull at imagining solutions to new problems, and boring aesthetically. Structural engineering is useful to physical architects, but is not enough for good design. Successful architecture includes creativity, vision, multi-disciplinary thinking, and humanity.

In the same way, classical computer science is helpful to software engineering, but will never be the whole story. Good software engineering also includes creativity, vision, multi-disciplinary thinking, and humanity. This observation frees software engineering researchers to spend time on what does succeed — building up a body of collected wisdom for future practitioners. We should not try to make software engineering into an extension of mathematically-based computer science. It won’t work, and can distract us from useful advances waiting to be discovered.”

“10 good reasons to look for something better than Java”, by Mario Fusco

Playing devil's advocate, I am passing along an interesting post by Mario Fusco. In it he lists 10 reasons to start thinking about a language to replace Java.

“Don’t get me wrong. During my professional life I have written tons of Java code and of course I still think it is a great language. For sure it was a great improvement over C++ and Smalltalk. But now even Java is starting to feel the weight of its 15 years.

Indeed, over the course of my experience I have had to deal with some mistakes, flaws and gaps in its design and specification that made my life as a Java programmer less pleasant. With millions of Java programmers and billions of lines of code out in the world, I am far from saying that Java is going to be dead in the near future. Anyway, after the rise of some JVM-compatible languages (my favorite is Scala), these issues are becoming ever less tolerable and I am starting to think that it is time to slowly move away from Java (but not from the JVM). In more detail, in my opinion, the 10 most important problems of the Java language are:

1. Lack of closures: I don’t think I have to explain this. Functional programming has existed for decades, but in recent years it has been gaining more and more interest, mostly because it makes it natural to write parallelizable programs. I partially agree with Joshua Bloch, who underlined the problems of introducing closures into Java as an afterthought (the BGGA proposal was truly awful), but their absence makes any kind of real functional programming in Java impossible.

2. Lack of first-class functions: this issue is in some way related to the former one, but I believe it is even worse. The only way to achieve a similar result in Java is by using the ugly and sadly famous one-method anonymous inner classes, but that is actually a poor solution. Even C# provides a better alternative through its delegate mechanism.
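
As an illustration of that point (a minimal sketch of my own, not from the original post): to hand the comparison “function” to Collections.sort, Java forces you to wrap it in a one-method anonymous inner class.

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortByLength {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("banana", "fig", "apple");
        // The "function" (compare two strings by length) cannot be passed
        // directly; it has to be wrapped in an anonymous Comparator.
        Collections.sort(words, new Comparator<String>() {
            public int compare(String a, String b) {
                return a.length() - b.length();
            }
        });
        System.out.println(words); // prints [fig, apple, banana]
    }
}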

3. Primitive types: it would be beautiful if everything in Java were an Object, but it was not designed that way. That led to some issues, such as the impossibility of having a Collection of int, only partially resolved in Java 5 through the autoboxing feature (see below). It also generated some confusion between passing by value and passing by reference: a primitive data type is passed to a method by value (a copy of the value is handed to the function), while for objects it is the reference that is passed in, so the method can modify the very object the caller sees.
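
A small sketch of that difference (my own example, not part of the original post):

public class PassingSemantics {
    static void tryToChange(int n, StringBuilder sb) {
        n = 42;               // reassigns a local copy only; the caller is unaffected
        sb.append(" world");  // mutates the object the caller's reference points to
    }

    public static void main(String[] args) {
        int n = 1;
        StringBuilder sb = new StringBuilder("hello");
        tryToChange(n, sb);
        System.out.println(n);   // prints 1
        System.out.println(sb);  // prints "hello world"
    }
}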

4. Autoboxing and autounboxing: this feature was introduced in Java 5 to overcome the problems caused by the presence of primitive types. It silently converts a primitive type into the corresponding object, but it often causes other problems. For example, an Integer can hold a null value, but the same doesn’t apply to int, so when such an Integer is converted to an int the JVM can do nothing but throw a hard-to-debug NullPointerException. It also causes other strange behavior, as in the following example, where it is not so easy to understand why the test variable is false:

Integer a = new Integer(1024);
Integer b = new Integer(1024);
// a < b and a > b auto-unbox both operands to int (1024 vs. 1024), so both are false;
// a == b compares object references, not values, so it is false as well.
boolean test = a < b || a == b || a > b;
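
The null-unboxing case mentioned above is just as easy to trip over (a two-line sketch of my own):

Integer count = null;
int n = count; // auto-unboxing calls count.intValue() and throws NullPointerException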

5. Lack of generics reification: generics are one of the cool features introduced with Java 5, but in order to maintain compatibility with older versions of Java they lack some important characteristics. In particular, it is not possible to introspect their generic type at runtime. For example, if you have a method that accepts a List<?> as a parameter and you pass it a List<String>, you are not allowed to know at runtime the actual type of the generic. For the same reason you cannot create arrays of generics. It means that, although it looks quite natural, the following statement won’t compile:

List<String>[] listsOfStrings = new List<String>[3]; // compile error: generic array creation

6. Unavoidable generics warnings: have you ever found it impossible to get rid of a nagging warning about generics? If you use generics heavily, like me, I bet you have. The fact that a special annotation (@SuppressWarnings("unchecked")) had to be introduced to manage this situation is symptomatic of the size of the problem and, in my opinion, a sign that generics could have been designed better.

7. Impossibility of passing a void to a method invocation: I admit that the need to pass a void to a method could look weird at first glance. Anyway, I like DSLs, and while implementing a special feature of my DSL library (lambdaj) I needed a method with a simple signature like this: void doSomething(Object parameter), where the parameter passed to this method is the result of another method invocation done with the sole purpose of registering the invocation itself and executing it in the future. To my great surprise, and apparently without a good reason, since the println method returns void, I am not allowed to write something like this:

doSomething(System.out.println("test")); // compile error: 'void' type not allowed here

8. No native proxy mechanism: proxy is a very powerful and widely used pattern, but Java offers a mechanism to proxy only interfaces and not concrete classes. This is why a library that provides this feature, such as cglib, is employed in so many mainstream frameworks like Spring and Hibernate. Moreover, cglib implements this feature by creating at runtime a class that extends the proxied one, so this approach has a well-known limitation: it is impossible to extend, and therefore to proxy, a final class like String.
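
For example (my own sketch, not from the post), the JDK's built-in java.lang.reflect.Proxy can wrap a List because List is an interface, but it could not wrap a concrete or final class the same way:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LoggingProxy {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        final List<String> target = new ArrayList<String>();
        List<String> proxy = (List<String>) Proxy.newProxyInstance(
                List.class.getClassLoader(),
                new Class<?>[] { List.class },
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] a) throws Throwable {
                        System.out.println("calling " + m.getName());
                        return m.invoke(target, a); // delegate to the real list
                    }
                });
        proxy.add("hello");               // prints "calling add"
        System.out.println(proxy.size()); // prints "calling size", then 1
    }
}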

9. Poor switch … case statement: the switch … case as specified in Java allows switching only on int and (starting from Java 5) enum values. That looks extremely underpowered, especially when compared with what is offered by a more modern language like Scala.

10. Checked exceptions: like primitive types, checked exceptions were one of the original sins of Java. They oblige programmers to do one of two equally horrible things: fill your code with tons of poorly readable and error-prone try … catch statements, where often the most meaningful thing to do is to wrap the caught exception in a runtime one and rethrow it; or blur your API with lots of throws clauses, making it less flexible and extensible.
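
A sketch of the first of those two options, the wrap-and-rethrow idiom (my own illustration, using java.io.IOException):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ConfigLoader {
    // The checked IOException is caught and rethrown as an unchecked exception,
    // so callers are not forced to declare or catch it themselves.
    public static InputStream open(String path) {
        try {
            return new FileInputStream(path);
        } catch (IOException e) {
            throw new RuntimeException("Could not open " + path, e);
        }
    }
}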

The real problem here is that the only way to fix most of the issues I mentioned is to take the painful decision to define a specification of the language that drops backward compatibility with the current one. I guess they will never do that, even though I believe it should not be extremely difficult to write a program that automatically translates old Java sources to make them compatible with this hypothetical new release. And in the end, this is the reason why I decided to start looking for a better JVM-compatible language.”

This post generated a good discussion in The Server Side forums, with lots of people taking positions for or against it. From my point of view, a good part of the weaknesses pointed out in the text can be addressed through libraries that extend the language. And frankly, I don't find Mario's arguments strong enough to justify a platform change, especially when we take into account all the legacy code built over the last decade. The problems he points out are real, but they do not really impact the productivity and extensibility of applications developed in Java.

One of the community's biggest complaints about Java used to be that its development tools were not as efficient and productive as those available for the .Net platform. Anyone who has followed the latest versions of Eclipse and NetBeans knows that features for building applications from models or templates are already part of every developer's daily routine. To make things even better, new frameworks appear almost daily with the goal of continuously improving the way applications are developed in the language (I am currently building my projects with Seam: complex and complete).

The discussion is a good one, but I still intend to invest some hours of my time in improving my skills as a Java developer. Since nothing lasts forever, though, it doesn't hurt to start looking at things like Scala, Ruby on Rails, Grails and other trendy languages and frameworks :-)

Joomla vs. WordPress

A very interesting post by Klaus Peter Laube about his experience using Joomla and WordPress as tools for publishing content on the Web. In the article, he explains why he prefers WP as a publishing solution, emphasizing the tool's simplicity. On the other hand, the complexity of customizing Joomla components, together with the security problems that have occurred in the past, makes that solution an option for more complex sites.

Since I use both tools, my opinion is similar to Klaus's:

  • Joomla was made to manage the content of medium-sized portals, with a reasonable learning curve for customizing a complete site without resorting to the famous templates created by the community. What attracts me most in this solution is the ability to aggregate dozens of customizable components developed by its thousands of developers, plus, of course, the wealth of available documentation (I found at least 11 books about Joomla at O'Reilly).
  • WP is a blogging tool… but with hundreds of plugins it can be used to manage content perfectly well. I would say that if your site's project allows an iterative and incremental process, in which versions can be delivered gradually to your clients, WP is a risk-free solution. You can keep adding functionality to the site with the plugins created by the community.

Although I am not a big fan of PHP, I use these two tools to manage my blog's content, in addition to using it as an LMS through Moodle. The ease of creating, customizing and publishing content with these tools explains their enormous popularity among web designers. On the other hand, I think more robust, “mission-critical” solutions should run on a platform with a matching degree of robustness and reliability. In recent months I have been studying several CMS solutions for the Java platform. In the end, after testing more than 15 open-source options, my choices came down to two tools:

  • JBoss Portal: perhaps the most complete CMS tool for the Java platform, it has the backing of a leading open-source software company (Red Hat) and an active community around it. It has plenty of documentation and is constantly updated by its developers. It offers a vast library of portlets and integration with the other JBoss solutions (JBoss AS, Seam, Hibernate, jBPM, etc.). Another positive point is the availability of an Eclipse plugin containing all the tools for developing portlets in that IDE. My main criticism of JBoss Portal concerns the requirements for running the platform as a whole: since it is optimized to use JBoss AS as its native application server, the necessary infrastructure demands a lot of server memory and some degree of tuning to run without bottlenecks.
  • Liferay Portal: in my opinion, the most flexible of the portal tools built for the Java platform. Like JBoss Portal, it has a company behind its development, which makes an open-source version available to the community. It offers a wide range of portlets developed by the community and allows projects to run on several different application servers. The tool's strong point is its ease of installation: just download the bundle with the chosen AS, unpack it into a folder and run the startup script. Being a more robust tool than its competitors written in PHP, Liferay requires a somewhat beefier server, at the risk of running into a memory leak along the way.

Although both Java solutions offer portlets for managing blogs, I confess that nothing compares to the flexibility and ease of use of WordPress. This is an area where the Java solutions will need some time to produce a tool that can compete with good old WP.

Blog de Jarley Nóbrega