Agile User Stories and Application as Persona


In Agile, requirements are communicated via user stories.  Traditional user stories use the form, “As a …, I want to …, so that I can ….”.  For example, “As a first-time user, I want to be able to create a user name and password, so that I can do transactions securely.”  That form captures functional requirements as user features, but it still does not address the often critical non-functional requirements (NFRs) in an Agile framework.  NFRs are requirements often associated with infrastructure, such as capacity, scalability, reliability, availability, or security.

One approach to including NFRs in an Agile context is to create user stories in which the application itself has a role or persona.  For example:

“As an application, I want to be able to process xxx transactions per second, to be able to handle peak monthly demand.”

“As an application, I want to be resistant to SQL injection attacks, so that my transaction integrity remains valid.”

Appropriate acceptance criteria need to be included; phrasing those can be challenging in these cases.
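
One way to anchor such criteria is to tie them to something verifiable in code or in the test suite.  As a rough, hypothetical sketch in Java (the class, table, and column names are illustrative only, not from any real system), the acceptance criteria for the SQL injection story above might require that user input reach the database only through parameterized queries:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountLookup {

        // Illustrative acceptance criterion: user-supplied values are never
        // concatenated into SQL text; they are always bound as parameters.
        public boolean accountExists(Connection conn, String userName) throws SQLException {
            String sql = "SELECT 1 FROM accounts WHERE user_name = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, userName); // input is bound, not spliced into the SQL
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }

A matching acceptance test could then feed classic injection strings (e.g., ' OR '1'='1) through the public interface and assert that they are treated as ordinary data.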

Extending the concept of the application persona further, it can also be used for other aspects of the application, such as the user interface or user experience.  Those may fall outside the straightforward transactional features covered by more typical user stories; use the persona for the more qualitative, fuzzier parts.  The usual guidelines still apply: make the story as granular as possible, and include acceptance criteria.  Acceptance criteria can be very difficult to create when one is trying to specify aspects of user experience such as quality or beauty.

Taking this still further, to the more general case of product development, including the difficult cases of ad campaigns or branding, the concept of the application or product as a persona remains tenable.

E.g., “As a brand, I want to be able to highlight the excitement of using our products, so that I can …..”

Cultivating the Zone

In the zone: usually we think of this state in terms of athletes or creative artists: musicians, writers, dancers, poets.  Yet it’s not just for individuals, but for teams and groups as well; an example is the hyperproductive IT development teams most often associated with XP and Agile practices.  Psychologists seem to prefer the term “flow,” and that’s how you’ll find it in Wikipedia.

Think of it as an alignment of deep skills and a clear, achievable, meaningful task; a state of resonance between the reality of the moment, the task to be accomplished, the skills brought to that situation, and an exceptional focus on that task.

It’s not just highly productive, or good solid work; it’s exceptional, out of the everyday mode of doing.

It’s never guaranteed; there are no sure recipes for entering that state.  It cannot be forced.  But it can be encouraged, cultivated, nourished, allowed to happen; impediments to it can be identified and removed.

Some conditions probably necessary to achieve this state are:

  • The person or team is skilled in their field, perhaps deeply skilled.  A sense of mastery, pride in those skills, and some confidence in their use.
  • It may be a bit redundant to say, but they have to enjoy what they’re doing.
  • The task is well defined, perceived as achievable, but challenging.
  • A high degree of focus and concentration; absorption in the task at hand.
  • This absorption can result in a feeling of being one with the task. The person becomes the work.
  • A feeling of fluidity, less difficulty than normal, with performance noticeably above the usual.

How can we cultivate the conditions that help us get into the zone?  As individuals, managers, team members, and project managers, we can set up an environment that allows it to happen:

  • Minimize distractions and encourage a sense of focus.  For some this means suitable music; for others, quiet; for some teams, something to impart energy. Perhaps a sign outside the team area, “Do not stick fingers in the cage.”
  • Make sure that tasks and goals are well defined and achievable. But don’t micro-manage.
  • Create an environment of minimal worries.  A sense of safety, isolation from threats.
  • Encourage a culture of mastery of the relevant skills, so that the basic tools are extensions of our thoughts. The thought effortlessly becomes the action towards the goal.

Don’t be concerned about rewarding this behavior: being in the zone, feeling that flow, is its own reward.

 

 

What’s Missing From Your SDLC

Documenting an SDLC (Software Development Life Cycle) is not about defining a minimum level of effort, nor is it something to ignore while you keep writing programs as the mood strikes.  It’s part of an effort to follow best practices and to deliver high-quality software on a repeated basis.  Even Agile methodologies can have an SDLC.

SDLCs usually cover the core of the process of software creation, from initial requirements through design, coding, testing, integration, user acceptance, etc.  Yet some key elements are often overlooked, or considered not to be part of software development.  I’d like to suggest some areas that, if made part of the SDLC, might improve the quality of the software you create.

Production support is not just about maintenance and fixing defects.  It’s about the ongoing improvement of what you delivered and preparation for the next release; it is part of the SDLC.  It’s about empowering the users, giving them the ability to fix their own problems when possible and reasonable, and giving the right tools to the first level of support.  Not just documentation, an FAQ, or online Help, but items like the creation of an error log and error trapping.  If the application has a database or data store, did you design a quality check on the stored data, rather than relying solely on front-end edit checking?
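
As one hedged illustration of what error trapping plus an error log can mean in practice, the Java sketch below (class and method names are hypothetical) catches a failure, records the context first-level support would need, and still shows the user something actionable rather than a stack trace:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderService {

        private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

        public String submitOrder(String customerId, String orderPayload) {
            try {
                return processOrder(customerId, orderPayload); // stand-in for the real business logic
            } catch (Exception e) {
                // Capture who, what, and why in the error log for first-level support.
                LOG.log(Level.SEVERE, "Order submission failed for customer " + customerId, e);
                // Surface a clean, actionable message to the user.
                return "We could not process your order. Support reference: " + customerId;
            }
        }

        private String processOrder(String customerId, String orderPayload) {
            return "OK"; // placeholder for the real implementation
        }
    }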

Does the application offer its users the opportunity to report problems, or give feedback about the interface they use?  What are the processes you envision for the ongoing improvement of the product?

Some SDLCs include a Lessons Learned as part of closing out the development effort, at project wrap-up.  The problem with that is it doesn’t do any good for the project in question, since that project is over, and it is seldom examined either for the next release or by other projects.  Taking a page from the Agile practice of a retrospective for each iteration (sprint), why not do a Lessons Learned at each major milestone in the project, so the project can actually benefit while still in progress?

Security reviews should be part of the SDLC for all applications, not just web apps, and not just external-facing ones.  How vulnerable is your application if another application behind the firewall is compromised?  Security is part of the design process, not just an activity during the testing phase.

Security challenges are constantly changing, and some level of awareness needs to be part of the mindset of developers, not just IT security professionals.  Look toward organizations like OWASP (the Open Web Application Security Project) as a resource.  Their Top Ten list of application security risks might startle some of your peers.

https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

The Department of Homeland Security has some excellent resources, such as the National Vulnerability Database (http://nvd.nist.gov/), as well as checklists and guidelines.  Look around their site.

Security is often a nightmare for developers, but there are people, organizations and tools to help you.

Look at your SDLC, think of the ongoing life of your applications, and what might enhance their quality.  The SDLC is an ongoing opportunity for process improvement.

 

Supporting Application Development

Methodologies or frameworks for the infrastructure that supports Application Development (AD) are an interesting topic in their own right, but one I’ve seldom seen mentioned or described, despite ITIL.  Even in companies that follow ITIL practices, the procedures are often not accompanied by feedback mechanisms to further improve things.

For example, do you trace incident or problem reports back to application change requests?  For major changes to application software, compare the post-implementation defect levels for different application groups.  Investigate the differences and consider ways to improve application development practices.
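
A minimal sketch of that kind of comparison, assuming incident records can be exported with the application group and change request they trace back to (the field names here are hypothetical):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class DefectComparison {

        // A post-implementation incident, traced back to the change that introduced it.
        static class Incident {
            final String appGroup;      // group that made the application change
            final String changeRequest; // the change request the incident traces back to

            Incident(String appGroup, String changeRequest) {
                this.appGroup = appGroup;
                this.changeRequest = changeRequest;
            }
        }

        // Count post-implementation incidents per application group.
        static Map<String, Integer> incidentsByGroup(List<Incident> incidents) {
            Map<String, Integer> counts = new HashMap<>();
            for (Incident i : incidents) {
                Integer current = counts.get(i.appGroup);
                counts.put(i.appGroup, current == null ? 1 : current + 1);
            }
            return counts;
        }
    }

The counting is the easy part; the value comes from investigating why the numbers differ between groups.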

Many organizations believe that by designating support staff as level 1, 2, or 3, they’ve established real procedures.  Other than “expected time to resolve” criteria, I’ve seldom heard of any real process analysis associated with those categories.  Nor have I heard of processes for a more organizationally flat structure.  A process needs to be more than assigning a severity level and contacting a software vendor.

What are your company’s infrastructure processes for supporting AD efforts?  What are the feedback mechanisms in those processes, such as periodic reviews?  If you were an infrastructure manager, how would you know how effective your support for AD has been, and where there may be perceived deficiencies?

Does each organization have a clear view of major plans and events for the next 3 to 6 months?  Does the AD area have any clue as to when the systems group is thinking of upgrading the version of the DBMS, or other essential software?

Hint to infrastructure:  if an AD team is pedal to the metal on delivering to the business, 2-3 weeks’ notice that you’re planning to upgrade to a new version of some key component is likely to be greeted with less than rabid enthusiasm.

Hint to the AD manager:  if you have not budgeted time, resources, and tooling (test scripts, etc.) for at least one DBMS version upgrade per year, and at least one OS upgrade every other year, some rude surprises may await you.

A tale of non-communication, with details altered out of sympathy:

Application development has a software product from a vendor, with much in-house enhancement.  In their published plans for the next quarter, AD had included an upgrade to a newer version of that product.  They knew it would need a newer version of the DBMS than the application currently used, but since that newer version was already in use in-house by other applications, they were not concerned.

When the specific requirements became official requests to the infrastructure, the fun began.  The newer version of the DBMS required a newer version of the operating system than was currently used by that application.  And their servers were only marginal for the newer level of the OS; a hardware refresh was recommended.

The newer version of the OS not only brought along new run-time libraries, but used newer versions of compilers.  Even without any hardware refreshes, the regression testing alone for the DBMS and OS upgrades added significant time to the overall project.

Lesson learned:  hold frequent, regular reviews of upcoming plans, including details such as “DBMS version xxx needed.”  A review, meaning a meeting (even if virtual) with actual communication, should be held.  Make sure the right people are talking to each other.

Rejuvenating the Infrastructure

The lack of agility in the infrastructure, especially in the data center and in systems programming/administration, is so obvious that the nascent DevOps movement has arisen to combat this perceived deficiency.  Much of that movement’s visible contribution has focused on automation and tools (e.g., Chef, Puppet, cfengine, Nagios) rather than on frameworks or processes.  That’s not surprising, given the coding talent of many DevOps advocates.

There are other obstacles:

The data center values stability of the environment and resists change; change is seen as a risk, not as any form of added value.  In general, and perhaps especially in larger IT shops, other than changes due to bug fixes or security patches, the data center is isolated from the positive aspects of application and environmental changes.  New metrics could provide a more flexible approach along the lines of increased Business Value (BV).  Some places, especially those along ITIL lines, may use a metric for risk as part of Change Control, but I have not heard of a corresponding one for enhanced BV.  That BV metric could be a 1 to 5 scale, or a set of numeric guidelines.
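
To make the idea slightly more concrete, a change record could carry both scores side by side; the Java sketch below is purely illustrative (the fields, the 1-to-5 scales, and the triage rule are assumptions, not an ITIL artifact):

    public class ChangeRequest {

        private final String id;
        private final int riskScore;          // 1 (low risk) to 5 (high risk)
        private final int businessValueScore; // 1 (low BV) to 5 (high BV)

        public ChangeRequest(String id, int riskScore, int businessValueScore) {
            this.id = id;
            this.riskScore = riskScore;
            this.businessValueScore = businessValueScore;
        }

        // One possible triage rule: fast-track changes whose business value
        // clearly outweighs their risk.
        public boolean fastTrackCandidate() {
            return businessValueScore - riskScore >= 2;
        }
    }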

Much of the effort of the infrastructure is about process, with solutions well defined.  The challenges are more along the lines of:

–       Sheer scope of tasks, as the whole enterprise may be involved, with the associated risk at that scale.

–       Interdependencies

–       Complexity

–       Volume of work (especially with sharp peaks)

–       Lack of teams

For volume, automation and tools can help; one should also investigate frameworks like Kanban, which limit the work in progress at any moment.
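
As a loose illustration of the limit-work-in-progress idea (a toy sketch, not a Kanban tool), a counting semaphore can cap how many tasks a group takes on at once:

    import java.util.concurrent.Semaphore;

    public class WipLimiter {

        private final Semaphore wipSlots;

        public WipLimiter(int wipLimit) {
            this.wipSlots = new Semaphore(wipLimit);
        }

        // Returns true if the task may start now; false means the WIP limit is
        // reached and the task stays in the backlog.
        public boolean tryStartTask() {
            return wipSlots.tryAcquire();
        }

        // Call when a task is finished, freeing a slot for the next item pulled in.
        public void finishTask() {
            wipSlots.release();
        }
    }

Anything that cannot acquire a slot stays visible in the backlog instead of becoming invisible, half-finished work.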

The lack of teams in infrastructure remains a source of concern for me.  Teams behaving as teams, not just silos of individuals with common product-oriented expertise.  For example, a group of system administrators, each with a distinct set of servers to manage and little or no collaboration among members of the group, is not a team.  That situation not only deprives the company and the participants of the benefits of a team approach, but reinforces the sense of isolation and endless repetition of upgrades and patches.

In my prior blog post on Skill-Centric Teams, a key point is that teams can be created across functional silos, basing them on shared skills and challenges.  While there are distinct differences between a UNIX system administrator, the Windows counterpart, and their mainframe brethren, there is so much in common, including, but not limited to:

–       Issues of change management

–       Communication with developers

–       Application development (AD) support methodologies or frameworks

Application development infrastructure support frameworks deserve a separate post.

 

Database Vulnerabilities and NoSQL

Information Security professionals state that perimeter security alone is not sufficient; just having network firewalls is not enough to secure your environment.  Both devices and people can circumvent perimeter security.  Devices:  how would you discover if a wireless router was placed on your network?  People:  do you ever use consultants, temps or contract workers?

There are many ways DBs are vulnerable, with new vulnerabilities appearing alongside new features and new releases.  How current is your DB on security patches?  Did any of your DB installs create demo or sample databases?  While those don’t hold sensitive data, they do represent security exposures.

At a recent chapter meeting of a web application security group, OWASP, some vendors were present.  Chatting with one, Application Security (http://appsecinc.com), I was glad to learn about a tool that deals with database vulnerabilities, DbProtect.  I’m sure it has competitors with similar functions, but to me, not an InfoSec specialist, it was very impressive, covering most of the common distributed DBs.

While the mainstream relational DBs have tools to help address vulnerabilities, the newer NoSQL data solutions do not, yet they are subject to many of the same types of vulnerabilities.  Use of MongoDB, CouchDB, Riak, Redis, Hadoop, MapReduce, etc. continues to grow rapidly.  And these are being used in more business-critical applications, or with sensitive data.

Even if used for non-critical applications and non-sensitive data, they can still present a risk, for some may have accounts with privileged access to the critical DBs, the relational ones; or they may provide elevated access to the operating system.

If you’re using a NoSQL DB, check with your vendor for security assurance tools, or at least some recommendations for those.  Make inquiries of existing DB security vendors;  even if they don’t currently have a suitable tool, they might respond to market demands and create one.

Even if it falls short of commercial products, in the absence of a suitable tool consider creating a tool, script, or checklist to seek out common vulnerabilities.  Change the passwords of any default DB accounts, and use strong passwords (not ones that might appear in a dictionary).  Check for vendor patches frequently.  Keep posted about security exploits against that specific DB; something like a Google Alert using a string based upon the DB name (e.g., Sybase) and “security exploit” could be a good starting point.  Keep learning about DB security.
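
In the absence of a commercial scanner, even a small homegrown check beats nothing.  The JDBC sketch below flags well-known default accounts that still exist in a database’s user catalog; the account list and the catalog view name are placeholders that would have to be adapted to your specific DB:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class DefaultAccountCheck {

        // Hypothetical examples of vendor default account names; adjust per DB product.
        private static final List<String> DEFAULT_ACCOUNTS =
                Arrays.asList("sa", "admin", "guest", "scott");

        // Returns the default accounts still present in the DB's user catalog.
        // The catalog view and column names below are placeholders and vary by vendor.
        public List<String> findDefaultAccounts(Connection conn) throws SQLException {
            List<String> found = new ArrayList<String>();
            String sql = "SELECT 1 FROM db_users WHERE user_name = ?"; // placeholder catalog view
            for (String account : DEFAULT_ACCOUNTS) {
                try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setString(1, account);
                    try (ResultSet rs = stmt.executeQuery()) {
                        if (rs.next()) {
                            found.add(account);
                        }
                    }
                }
            }
            return found;
        }
    }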

Feeding the Elephant – The Release of Java 7

Java was born of the intelligence and generosity of Sun Microsystems; over the past 15 years, it has evolved, grown in features and put down roots in an incredibly large part of the technology landscape.  The generosity of Sun made Java open source (for the most part), furthering its spread.

New releases of Java occurred about every two years through 2006.  Sun was in trouble as a company, and Java languished, with no new release in 2008 or 2009.

Oracle bought Sun, with all of its assets, including Java.  Oracle already had some presence in the Java environment due to its earlier acquisition of BEA, noted for its WebLogic app server and for its high-performance JVM, JRockit.

With the release of Java 7, the first since 2006, Oracle appears to be positioning Java for renewed life.  Note that Java SE 7 is still not available for the Mac, nor was a projected date for that mentioned.

There are only a few noteworthy new features, such as the new I/O library (NIO2), better directory support, symbolic link support, and features to take advantage of multi-core processors (the Fork/Join framework).  It’s more of a new foundation for work to come, such as the merging of JRockit with Sun’s HotSpot JVM.
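
For a taste of what the NIO2 additions look like, the snippet below uses the new java.nio.file API (Path, Files, DirectoryStream) to list a directory and create a symbolic link; the directory and file names are placeholders:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class Nio2Demo {
        public static void main(String[] args) throws IOException {
            Path dir = Paths.get("/tmp");  // placeholder directory

            // NIO2: iterate over directory entries, using try-with-resources (also new in Java 7)
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
                for (Path entry : entries) {
                    System.out.println(entry.getFileName());
                }
            }

            // NIO2: symbolic link support
            Path target = dir.resolve("data.log");  // placeholder target
            Path link = dir.resolve("latest.log");  // placeholder link name
            if (!Files.exists(link)) {
                Files.createSymbolicLink(link, target);
            }
        }
    }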

Java benefits from a large talent base of highly experienced developers, many of them enthusiastic about the language despite the lack of enhancement in recent years.  At the local JavaSIG meeting, several developers expressed concern about Java over the longer term.  For some, the language was not only mature but starting to feel a bit dated.  It’s not clear to me how much of that is valid and objective, and how much is an emotional reaction to Oracle now being in charge of Java’s future.

Oracle has smart product managers and an energetic marketing arm.  They surely recognize that they need to win the confidence and trust of the development community and of those making strategic decisions for technology.  This release of Java is a reasonable first step.  The next should be a detailed road map, and timely delivery of the items on it.

The strategic question some ask is, “Will Java offer the performance and features we need to stay competitive five years from now?”  The jury waits on Oracle to prove its commitment over the next two years.

Java for Oracle may wind up being like the story of the person who bought an elephant, and then was faced with the cost of caring for and feeding it.