Will containers take over?

And if so, why haven't they done so yet?

Contrary to what many people think, containers are not new; they have been around for more than a decade. They have only recently become popular with a larger part of our ecosystem. Some people think containers will eventually take over.

In my humble opinion, it is all about application workloads. When I wrote about a decade of open source virtualization eight years ago, we looked at containers as the solution for running a large number of isolated instances of something on a machine. By large we meant hundreds or more instances of Apache; one of the example use cases was an ISP that wanted to give a secure but isolated platform to its users: one container per user.

The majority of enterprise use cases, however, were full VMs. That was partly because we were still consolidating existing services onto VMs and weren't planning on changing the deployment patterns yet, but mainly because most organisations didn't need to run 100 similar or identical instances of an application or service. They were going from 4 bare-metal servers to 40-something VMs, but they had not yet felt the need to run hundreds of them. Software architecture had just moved from fat-client applications that talked directly to bloated relational databases containing business logic, to web-enabled multi-tier applications. In those days, when you suggested running one Tomcat instance per VM because VMs were cheap and it would make management easier ("Oops, I shut down the wrong Tomcat instance"), people gave you very weird looks.

Slowly, software architectures are changing. Today's new breed of applications is small, single-function, and dedicated, and it interacts frequently with its peers; combined, these small applications provide functionality similar to a big, fat application of 10 years ago. But when you look at the market, that new breed is still a minority. A modern application might consist of 30-50 really small ones, all with different deployment speeds. And unlike 10 years ago, when we had to fight hard to be allowed to build separate dev, acceptance, and production platforms, people now consider that practice normal. So today we do get environments that quickly grow to 100+ instances while requiring similar CPU power as before, and the container use case as we proposed it in the early days is slowly becoming more common.

So yes, containers might take over... but before that happens, a lot of software architectures will need to change, a lot of elephants will need to be sliced, and that is usually what blocks cloud, container, agile, and devops adoption.

Is Presenter First Still Valuable to Modern App Architecture?

Thijs van Dien wrote to us in early 2015 with some great questions about Presenter First’s place in application architecture in the post-MVC era. His well-researched questions were a joy to respond to; while there’ve been many advances in desktop and mobile programming patterns since we first wrote about PF back in 2007, we still find value in the core aspects of Presenter First. I’ve captured our email conversation here for broader sharing and posterity.

Q: What stance does PF take among the more well-known patterns as described by Fowler? Even though it’s called MVP, the role of the Model and Presenter feel rather different. In most classic MVP implementations, the Model is in principle a Domain Model or Business Logic Layer, whereas in PF it’s closer to an Application Model.

Martin Fowler retired his take on the Model-View-Presenter pattern and split it into two related patterns: Supervising Controller and Passive View. Structurally, Presenter First most closely resembles Passive View.

As Thijs observed, Presenter First treats a triad’s Model as an application model more so than a data model. Though it may provide access to data, a Model also provides events and actions that potentially create side-effects in the system.

“Application model” is a term we use to differentiate strongly from “data model”, but is not intended to necessarily differ strongly from “domain model” or “business logic layer”. In many applications, our models map very neatly to the business and its domain.

Presenters are best situated where they can translate gestures into activity in the app, and to convey changes in the app back to the user. They provide a strong layer of insulation between the UI and app logic, as well as the semantic bridge between two subdomains of your application. Your View code has no dependency on the Model, and vice versa.
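To make that insulation concrete, here is a minimal sketch of a PF triad in Python. All names here (`SignupModel`, `SignupPresenter`, the tiny `Event` helper) are hypothetical illustrations, not code from the original PF paper; the point is only the dependency direction: the Presenter knows both sides, while View and Model never reference each other.

```python
class Event:
    """Tiny observer helper so Model and View can announce changes."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, fn):
        self._handlers.append(fn)

    def fire(self, *args):
        for fn in self._handlers:
            fn(*args)


class SignupModel:
    """Application model: owns behavior and raises events on change."""
    def __init__(self):
        self.signup_completed = Event()

    def submit(self, email):
        # ... real business logic would live here ...
        self.signup_completed.fire(email)


class SignupView:
    """Passive view: exposes gestures as events, renders on demand."""
    def __init__(self):
        self.submit_clicked = Event()
        self.last_message = None

    def show_confirmation(self, email):
        self.last_message = f"Signed up {email}"


class SignupPresenter:
    """Translates View gestures into Model calls and Model events into View updates."""
    def __init__(self, model, view, email_source):
        view.submit_clicked.subscribe(lambda: model.submit(email_source()))
        model.signup_completed.subscribe(view.show_confirmation)


model, view = SignupModel(), SignupView()
presenter = SignupPresenter(model, view, email_source=lambda: "a@b.example")
view.submit_clicked.fire()
print(view.last_message)  # -> Signed up a@b.example
```

Because the View is purely passive, it can be replaced by a test double (as above, where `last_message` stands in for actual rendering) without touching the Model.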

Q: Is PF an application architecture pattern as suggested in the 2007 paper? In his blog post, Anthony Ferrara explains that the biggest problem with implementing MVC, MVP etc. is that they pretend to be application architecture patterns when in fact they are not.

We identify Presenter First as an architectural pattern, or micro-architecture. It’s neither an application architecture nor an application template: PF doesn’t define high-level application architecture, it doesn’t address data modeling or persistence, it doesn’t speak to workflows or UI planning, and it doesn’t show you how to initialize a complex application.

Q: What’s the key to deciding when to introduce a new triad, or rather to extend an existing one? Often I find functionality on one form to be so related and interdependent that any kind of split quickly feels arbitrary. There are many actions to take when one thing happens, but the components involved also have independent functionality. I’m afraid all that forwarding and implicit behavior of events will result in a “no forest for the trees” situation: where did the application go?

A typical application will have perhaps dozens of triads. We tend to match them up to user-verifiable features, such that we won’t need to re-open the classes that comprise a PF triad (or quad, or quint, depending on how you’ve extended the pattern) unless there’s actually a change in the feature it supports.

See my post on getting your triads talking for an intro to connecting features behind the scenes.

Beware mixing different granularities of interaction. “User wants to sign up” is not the same level of concern as “User selected India in the Country drop-down, so the States drop-down needs to update accordingly.” When developing more sophisticated UIs, it may make sense to use PF to create a rich, encapsulated AddressWidget (AddressModel, AddressPresenter, AddressView) that serves as the “View” for a domain-level “SignupPresenter”.
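A Python sketch of that composition might look like the following (again, every name is hypothetical). Fine-grained interactions, such as the country selection updating the states list, stay inside the widget; the outer, domain-level presenter only ever asks the widget for a completed address.

```python
class AddressModel:
    """Fine-grained application model for the address widget (sample data)."""
    STATES = {"India": ["Kerala", "Goa"], "USA": ["Ohio", "Iowa"]}

    def states_for(self, country):
        return self.STATES.get(country, [])


class AddressWidget:
    """Internally a PF triad; externally just a 'View' with an address() query."""
    def __init__(self, model):
        self._model = model
        self._country = None
        self.state_options = []

    def select_country(self, country):
        # Fine-grained gesture: handled entirely inside the widget.
        self._country = country
        self.state_options = self._model.states_for(country)

    def address(self):
        # The only surface the domain-level presenter sees.
        return {"country": self._country, "states_shown": self.state_options}


class SignupPresenter:
    """Domain-level concern: 'user wants to sign up'."""
    def __init__(self, address_view):
        self._address_view = address_view

    def on_submit(self):
        return self._address_view.address()


widget = AddressWidget(AddressModel())
widget.select_country("India")  # updates the states list internally
print(SignupPresenter(widget).on_submit())
# -> {'country': 'India', 'states_shown': ['Kerala', 'Goa']}
```

The `SignupPresenter` never learns that a Country drop-down exists, which keeps the two granularities of concern from bleeding into each other.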

Q: What is the appropriate abstraction level between each of the members of the triad? I’m referring to the granularity of the messages, and the reusability of each triad member. For example, given a simple user story, like “when the user submits, generate a PDF with the form data and send it to the specified email address,” what, in your opinion, is the more appropriate approach for the Presenter? (See examples below.)

Example 1

OnSubmit:

    formData = view.FormData();
    validationResult = model.ValidateFormData(formData);
    if (validationResult != ValidationResult.Valid) {
        this.ShowValidationError(validationResult); // Determines the message and shows it on the View
        return;
    }
    pdf = model.GeneratePdf(formData);
    email = view.Email();
    if (!model.ValidateEmail(email)) {
        this.ShowEmailError(email); // Determines the message and shows it on the View
        return;
    }
    model.EmailFile(email, pdf);

What I like is that the Model is stateless, but what I don’t like is that the Presenter has to ask all the right questions and does lots of data shoveling; the Model has to either trust its input (not a good thing, I suppose) or perform all the validation again.

Example 2

OnSubmit:

    model.EmailFormAsPdf(view.FormData(), view.Email());

OnInvalidFormData:

    view.DisplayValidationResult(model.LastFormValidationResult());

OnInvalidEmail:

    view.DisplayEmailError();

I think it’s better that the Model keeps the knowledge to itself (and it’s asking for forgiveness instead of permission), but what I don’t like is that the role of the Presenter is so minimal that in many cases it’s pure boilerplate; the Model has to reflect the user story exactly. Perhaps that’s justified just to keep the Model and View from talking directly. The asynchronous nature requires very specific events and returning results as properties (more boilerplate).

Example 3

OnSubmit:

    formData = view.FormData();
    result = model.TryEmailFormAsPdf(formData, view.Email());
    if (result == EmailFormResult.InvalidFormData) {
        view.DisplayValidationResult(result.FormValidationResult());
    } else if (result == EmailFormResult.InvalidEmail) {
        view.DisplayEmailError();
    }

Similar to the above, but the Presenter needs to know more about the Model.

Example 4

OnFormChange:

    model.FormData = view.FormData();

OnEmailChange:

    model.Email = view.Email();

OnSubmit:

    model.Submit();

OnInvalidFormData:

    view.DisplayValidationResult(model.FormValidationResult());

OnInvalidEmail:

    view.DisplayEmailError(model.EmailError());

This is what I considered an “Application Model”. At all times, the Model represents the state of the application, regardless of the View and Presenter. It seems like a ViewModel with behavior, however. And it gives the Presenter a merely synchronizing role.

This is a great set of examples on how you can emphasize more or less detail in your implementation style. (Thijs’s own comments on each example do a fine job of pointing out their pros and cons.)

Thinking back over the years of building apps using PF, it seems like we evolved our Presenter style from 1 to 3 to 4 to 2.

Example 2 is my favorite kind of Presenter. A Presenter is code that says “this is how our application will service user interactions.” When users submit their request for membership, we email a PDF to the HR department (or we warn the user they’ve made a mistake in filling out the form). The code should say these things and little else. In order to perform this function meaningfully, Presenters need to occupy a particular niche in your architecture.
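As a Python sketch of that Example-2 style (all names here are illustrative, not from Thijs’s original code), the Presenter reads almost like the user story, one line per behavior, while the Model keeps its validation knowledge to itself and announces failures through events:

```python
class MembershipModel:
    """Application model: validates internally, announces outcomes via events."""
    def __init__(self):
        self.invalid_email_handlers = []
        self.sent_handlers = []

    def email_form_as_pdf(self, form_data, email):
        # Crude stand-in validation; a real model would do much more.
        if "@" not in email:
            for handler in self.invalid_email_handlers:
                handler()
        else:
            for handler in self.sent_handlers:
                handler(email)


class MembershipView:
    """Passive view test double: queries for input, displays outcomes."""
    def __init__(self):
        self.shown = None

    def form_data(self):
        return {"name": "Ada"}

    def email(self):
        return "hr@example.test"

    def display_email_error(self):
        self.shown = "bad email"

    def display_sent(self, email):
        self.shown = f"sent to {email}"


class MembershipPresenter:
    """Story-level wiring only: submit the form, react to Model events."""
    def __init__(self, model, view):
        self.model, self.view = model, view
        model.invalid_email_handlers.append(view.display_email_error)
        model.sent_handlers.append(view.display_sent)

    def on_submit(self):
        self.model.email_form_as_pdf(self.view.form_data(), self.view.email())


view = MembershipView()
MembershipPresenter(MembershipModel(), view).on_submit()
print(view.shown)  # -> sent to hr@example.test
```

The boilerplate Thijs worries about is visible here too (the event wiring in the constructor), but every line of `on_submit` and its reactions maps directly to a sentence in the user story.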

I once saw Uncle Bob Martin stress that a major goal of your architecture should be to define a clean, obvious boundary between your application and the details of its implementation. Why should the underlying architecture of an application change noticeably just because we reverse the request/response mechanism and use HTTP instead of mouse events? Or Rails instead of WPF? Or a local MySQL database instead of a REST service? (I believe that talk evolved into this talk on Confreaks… a diagram from that talk provides an interesting visual example of Uncle Bob’s point.)

I’ve come to realize that our best Presenters occupy space near Uncle Bob’s “boundary” lines, just like the Presenter in the diagram. If you think about your Model as a programmatic facade of your application, a Presenter is a piece of software that knows how to connect a user to the features available in the application. (Note that although the diagram is a sample drawn over a Rails application, Uncle Bob succeeds in separating the architecture from its details well enough that I can re-appropriate it for a discussion on GUI programming.)

The value of considering your Presenters first is you’re forced to draw segments of that boundary line before you can get started. While this isn’t enough to generate an entire architectural vision, it can help frame high-level discussions about connecting users to your business domain, before you start writing schemas and form-layout code.

For further reading, you can check out all the other Presenter First articles here on Atomic Spin.
