Erlang/OTP is without doubt a powerful tool. But as with any technology, it is not suited to every use case imaginable. It has many strengths but some weaknesses too. Erlang shines in everything related to distribution and reliability. If you want to write software that can be distributed arbitrarily over multiple instances, CPU cores, machines or even data centers, without changing the programming paradigm for any particular case, then Erlang is probably the perfect tool. To benefit from Erlang you should have a problem that is best solved by a distributed solution. That may be reliable message passing or rock-solid in-memory storage, but it is probably not web page generation and delivery.
In the Internet world, Erlang is perfect for middleware or backend systems such as caches, message queues and exchanges, databases or storage abstractions. But it is probably not the right tool for writing web applications. Sure, you may add an HTTP endpoint to your messaging middleware or database, but should you use Erlang the way one usually uses JSP, PHP or Ruby? Probably not. Web applications have a very short life cycle. The business rules are in constant flux and have to be changed over and over again. Defining such rules in Erlang may turn out to be a very hard job. Despite its expressiveness, Erlang is not a language to be embedded, as in Yaws dynamic content or in ehtml. It works, but it is comparably hard to maintain and not that fast in terms of execution time.
The problem is not that Erlang is a functional language, but that it is a functional language with a peculiar syntax: all those commas, semicolons and full stops, and a rather verbose notation for associative arrays, the records. The benefits of the syntax, for example how it handles binary data, are of little use when programming the business rules of a web application. It is simply not designed for such a use case. It is designed for writing reliable software that deals with network communication and related tasks, not for expressing complex rules in a domain-specific language.
On the other hand, the Erlang VM is highly optimized to spawn and execute a huge number of small processes, but it is not optimized to execute a single thread in the shortest possible time. Modern Java implementations may easily beat current Erlang here. Sure, the maintainers of the Erlang VM do a lot of work improving both performance and SMP scalability, but those optimizations are not yet at their end. Raw performance is not the domain of OTP.
Whenever you think about using Erlang, the first question to ask is whether your problem involves redundancy, scalability or distribution. The second is whether you can do without top single-threaded performance, and without business rules that change constantly. If you answer these questions with yes, you get a highly optimized and convenient tool for the job.
Cloud computing is becoming more and more popular, and it is cool indeed. With Amazon web services such as EC2, SimpleDB, S3 etc., or with Google App Engine, it is possible to build scalable web applications easily, even self-scaling applications. Since AWS is completely based on web services, not just for usage but for resource management as well, it should be possible to build an application that detects load peaks and starts up new nodes, all automatically.
Unfortunately, this cool technical feature also adds new attack vectors for black hats. New attacks may be based on the pricing model of "pay only for what you use". And that is exactly the point: since your application spreads automatically over an arbitrary number of new nodes and consumes an unlimited amount of resources, attackers do not need to DDoS your application; they just need to run a DDoP, a "Distributed Denial of Payment" attack.
Their botnets just need to use the application in the ordinary way. As load grows, your resource usage grows, and so do your debts.
There is a reason why startups use open source software, PostgreSQL for example, rather than Oracle with a per-processor license. The same reason may lead them to choose a hosting solution where they pay for what they have, not for what they use. A startup company may quickly reach its financial limits under a DDoP condition. So read my lips: don't forget to add some kind of throttle and good bot and crawler detection when you enter the cloud.
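The throttle I'm talking about can be as simple as a token bucket per client key (an IP address, an API key). Here is a minimal sketch in Python; the class name `Throttle` and its parameters are illustrative, not part of any particular framework, and a real deployment would persist the buckets somewhere shared between nodes:

```python
import time
from collections import defaultdict

class Throttle:
    """Token-bucket throttle: one bucket per client key (e.g. an IP)."""

    def __init__(self, rate, burst):
        self.rate = float(rate)    # tokens refilled per second
        self.burst = float(burst)  # maximum bucket size
        # each bucket is (tokens, timestamp of last refill)
        self.buckets = defaultdict(lambda: (self.burst, time.monotonic()))

    def allow(self, key):
        tokens, last = self.buckets[key]
        now = time.monotonic()
        # refill proportionally to the elapsed time, capped at burst
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[key] = (tokens - 1.0, now)
            return True
        self.buckets[key] = (tokens, now)
        return False
```

Used in a request handler, `if not throttle.allow(client_ip): return 429` is enough to put an upper bound on what a single client can cost you per second.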
You may also read this article about "Cost Allocation" as a new computing resource affecting algorithms:
Programmers love to talk about performance, but only a few talk about scalability, despite the fact that it is scalability that really counts. Scalability is about how much your effort has to grow with your success. This is what you must keep in mind if you start a business on the web.
It is always a good idea to plan a new project for scalability from the first steps on. There are some easy tasks with just a little development overhead but great benefit if your project reaches the level of success you intend.
- divide and conquer in size and time
- use a good abstraction layer for data access, so you stay free to partition your data later
- avoid complicated joins and excessive normalization; they'll kill you once you have to distribute data over multiple databases and machines
- ask whether you really need relational database schemas for everything, or whether a simple key-value store works as well
- cache data access from the beginning
- cache more, compute less
- use functional decomposition: partition your system into small, efficient units and plug them together through abstractions
- use asynchronous strategies to manage load peaks
- split static and dynamic content carefully; soon you may need a CDN to deliver the static part
- apply a good deployment strategy with rollback
- measure and monitor performance and scalability systematically
- scale your revenue in parallel with your technology
Don't miss the last point. It's the most important.
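To make "cache more, compute less" concrete, here is a minimal sketch of a time-to-live cache decorator in Python. The names are hypothetical and the in-process dict merely stands in for a shared cache like memcached, which is what you'd actually use once requests are spread over several machines:

```python
import time
from functools import wraps

def cached(ttl):
    """Cache a function's results for `ttl` seconds.
    A sketch: no eviction, no locking, process-local only."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]          # still fresh: skip the computation
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator
```

Wrapping an expensive lookup with `@cached(ttl=60)` turns repeated identical requests into dictionary reads, which is exactly the kind of cheap win that pays off long before you need to shard anything.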