You deliver software. That's what you do. And it can be frustrating when things take too long, when bugs pop up, or when things break in production. But you have what it takes. Programming with Palermo can help improve your confidence by delivering timeless knowledge, removing unnecessary obstacles, and restoring excellence within your development team so it can reach its full potential. Simply - simplify!
You can find the code used in this video at the Clear Measure GitHub.
In this episode, Jeffrey shares how to lower developer onboarding costs.
Situation
Custom software is inherently expensive, but there are plenty of easy things your team can do to reduce those costs. I'm going to talk about one of them that helps tremendously when it comes to adding or replacing a developer on your software team: the one-click build.
Mission
Anyone overseeing a software team cares about quality, efficiency, and productivity. These are important because they translate directly to labor costs. Software teams are already expensive. What really hurts is when the team has suboptimal processes that balloon already high costs. When a new developer joins a team, many teams spend days or weeks onboarding him before he can start working on the code and contributing changes. It doesn't have to be this way. You should expect a new developer to be able to contribute code changes on the first day.
Execution
Let's go through a scenario. A new developer is ramping up on the team, and he is eager to start making contributions. He wants to get the code up and running on his computer quickly. So, what's the first thing we do? We clone the repository from source control and then try to run the application. Invariably, this fails. Why? First off, there are plenty of dependencies the local developer's workstation doesn't have: namely, the SQL Server database, and probably several other dependencies that must be installed or set up in a certain way. The experienced members of the team have these steps memorized, but of course, this is super-secret tribal knowledge to the newcomer. Maybe there is a documented list of the steps necessary for proper developer workstation setup. If the list is kept up to date, the new developer can follow the steps and get the application working. What invariably happens, every time, is that a more tenured member of your software team takes time out to help the new developer get the software running on his workstation. You're always going to have the overhead of explaining what the application is, how it's put together, and the thought process behind it, but the time that is wasted is just the mechanics of getting the application running on a new workstation. This cost also exists when an existing team member is setting up a new computer: the same setup has to happen all over again.
It's all unnecessary. What you should expect from your team is that the new-computer or new-team-member experience is quick and automatic. The process should be two steps: first, clone the source code; second, run a single command, and the application works. The one-click build, as it is called, is a very simple script that checks for the needed dependencies on the local computer and installs them. If a dependency does not have an unattended install, the script can prompt the developer with a clear error message stating what software needs to be installed. But these days, most developer dependencies can be installed automatically. The most basic of these is the SQL Server database that .NET applications connect to. Even small microservices are responsible for their own piece of data and require some type of data store to be set up.
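The dependency-checking half of a one-click build can be sketched in a few lines. This is a minimal illustration, not a finished build script: the tool list (`dotnet`, `sqlcmd`, `git`) is a hypothetical example, and a real script would go on to perform unattended installs and database setup rather than just reporting what's missing.

```python
import shutil

# Hypothetical dependency list for illustration; a real one-click build
# would mirror your project's actual requirements.
REQUIRED_TOOLS = ["dotnet", "sqlcmd", "git"]

def missing_tools(tools):
    """Return the tools that cannot be found on the PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

def check_dependencies(tools=REQUIRED_TOOLS):
    """Print a clear message instead of failing mysteriously mid-build."""
    missing = missing_tools(tools)
    if missing:
        print("Please install before building: " + ", ".join(missing))
        return False
    print("All dependencies found. Building...")
    return True
```

A script like this is the first thing the new developer runs after cloning, turning tribal knowledge into an executable checklist.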
Conclusion
To conclude, expect new software team members to contribute code changes immediately. Equipping them with the right onboarding process is your key to this reality. And a one-click build is a tool no software team should be without.
Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.
This program is syndicated on many channels. To send a question or comment to the show, email programming@palermo.network. We’d love to hear from you.
To use the private and confidential Chaplain service, use the following: Gentleman: 512-619-6950 Lady: 512-923-8178
In this episode, Jeffrey shares how to measure a software team.
Situation
Many software team leads and architects don't implement management practices that are standard in other parts of the business. Whether it's OKRs (Objectives and Key Results), EOS, Scaling Up's scoreboard, or Kaplan's Balanced Scorecard, business measurement has long been a staple of ensuring that a part of a business is functioning well. But executives overseeing software teams often don't have a tool for measuring the effectiveness of a team or an entire software department.
Mission
Anyone overseeing a software group of any kind needs a way to measure the effectiveness of that group. Let's zoom down to a single software team and look at what must be measured at a minimum. Once the measures are identified, the team can then report them weekly to the appropriate layer of management. And just like in every other department, if the measures are aligned with business objectives, then the reports can be relied on to know whether the objectives are on track to be accomplished.
Execution
The tool you need in order to measure a software team is a good, old-fashioned scorecard. It's not high-tech. Every business methodology of the last three decades has employed some format of scorecard for tracking measurements over time against thresholds of acceptable values. We'll go over the Clear Measure Way scorecard template and how to use it.
Mental Model
From cash flow forecasts to sales pipelines and order shipping, most businesses are used to tracking numbers weekly. Some numbers are tracked monthly, but in software, weekly is better aligned with the normal flow of a software team. You can obtain the Clear Measure Way scorecard template for free from the Clear Measure website. It's a Microsoft Excel worksheet. The first tab is the scorecard itself. The next tab contains instructions for how to use the scorecard. It comes prepopulated with the minimum suggested measures for a software team. As you become more comfortable with it, you'll undoubtedly add more measures. The researched DORA metrics are part of our minimum, so you'll find those on the scorecard.
At the top of the scorecard template, you'll find a link to a tutorial article that explains how to use the scorecard and how the Excel template is put together.
Each week, you'll have the team populate the numbers in the column that represents the current week. Over time, you'll probably choose to hide past columns so that you can glance at the current week and the trailing 12 weeks, giving you a good view of a rolling quarter of performance.
Team Alignment
The measures on the scorecard are divided by the pillars of the Clear Measure Way but are preceded by a Team Alignment section. We suggest that the software team's scorecard include the top-level business measures that are managed by the executive overseeing the team. Without tracking the impact the software team is making in the business, it's easy to become misaligned with business objectives.
If you don't already have these quarterly targets, I'd invite you to use the free Team Alignment Template, also provided by the Clear Measure Way. We have plenty of information about how to align a software team to become effective. Once it's clear what the team is trying to accomplish, add those few measures to the scorecard. If the measures have an acceptable threshold, add that into column F. This will cause the auto highlighting to work, coloring green for numbers within the thresholds and red for numbers outside the threshold.
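The auto-highlighting behavior can be expressed as a simple rule. This is an illustrative sketch, not the template's actual Excel formula: I'm assuming each measure carries a direction, since "good" means above the threshold for some measures (items delivered) and below it for others (escaped defects).

```python
def highlight(value, threshold, direction="min"):
    """Return 'green' when a value is within its acceptable threshold.

    direction="min" means the threshold is a floor (higher is better);
    direction="max" means it is a cap (lower is better).
    """
    ok = value >= threshold if direction == "min" else value <= threshold
    return "green" if ok else "red"

print(highlight(12, 10, "min"))  # green: 12 items delivered beats a floor of 10
print(highlight(5, 2, "max"))    # red: 5 escaped defects exceeds a cap of 2
```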
Establishing quality
The first pillar we suggest you measure is quality. This should be the first priority of any software team; without it, a team cannot be effective. Without consistently high quality, the team will constantly circle back to diagnose, analyze, and fix defects. These tend to accumulate, and teams without quality end up with little time left to actually work on new features or valuable changes.
We recommend a few essentials when it comes to measuring quality:
- Defects Caught
- Defects Escaped
- Defects Repaired
- Mean Time to Resolve
Ultimately, you want zero defects to escape into production. But you also want to track the defects caught before production. Think about it: every time you move a card to the left on your work-tracking board, that signifies a problem that has to go backward in your process to be corrected. That's a defect. Track it.
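Mean Time to Resolve is simply the average of the caught-to-repaired durations for the period. A quick sketch, using made-up defect timestamps for illustration:

```python
from datetime import datetime, timedelta

def mean_time_to_resolve(defects):
    """Average the time from when each defect was caught to when it was repaired."""
    durations = [repaired - caught for caught, repaired in defects]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical weekly defect log: (caught, repaired) timestamp pairs.
log = [
    (datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 15, 0)),    # 6 hours
    (datetime(2024, 1, 9, 10, 0), datetime(2024, 1, 10, 10, 0)),  # 24 hours
]
print(mean_time_to_resolve(log))  # 15:00:00
```

Most modern work-tracking tools compute this for you; the point is that the number is cheap to produce every week.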
Achieving stability
The stability pillar looks at what is happening with software running in a production environment, serving customers. Two of the DORA metrics live here, as well as a couple of others. Our goal is to empower the team to deploy changes to production frequently, at any time during the week, all without business disruption. Additionally, we want to know that the software runs in a way that supports its users, again without business disruption. Software spends most of its useful life in a state of slow change, running day to day and yielding the return on the investment made in it. The slow changes are mostly those required so that the software can be properly maintained. The measures we recommend for this pillar, at a minimum, are:
- Number of deployments
- Major production issues
- Minor production issues
- MTTR (mean time to recovery)
Regardless of the service desk system you use to track production issues, there are always more statuses than you need, so choose which statuses represent a business disruption and which ones do not. There will always be production issues from time to time. The key is to never have a business disruption due to it.
Increasing speed (productivity)
Our last category on the scorecard is for the Speed pillar of the Clear Measure Way. This is where we track the productivity of the team: the throughput of new features and valuable software changes. It is appropriately last, because quality and stability must take priority over it if we have any hope of a speed that is acceptable to the business.
These measures are very simple and follow the DORA model as well:
- Items Delivered
- Work in Process (WIP)
Because we are tracking a new value every week, we can derive the cycle time by comparing the number of items in process with the number of items delivered. Kanban research has some good findings on WIP thresholds that tend to work. My favorite is to start with a value equal to the number of members on the software team. This allows each team member to work on one item at a time. Then, as you gain confidence, you can increase this threshold as you verify that the items delivered each week are increasing, not decreasing.
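The relationship between WIP, throughput, and cycle time described above is Little's Law from queueing theory: average cycle time equals work in process divided by throughput. A quick sketch:

```python
def average_cycle_time_weeks(wip, items_delivered_per_week):
    """Little's Law: average cycle time = work in process / throughput."""
    return wip / items_delivered_per_week

# A 6-person team holding WIP at 6 and delivering 3 items per week
# averages a 2-week cycle time per item.
print(average_cycle_time_weeks(6, 3))  # 2.0
```

This is why raising the WIP threshold without raising items delivered only lengthens cycle time.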
When it comes to measuring the throughput or speed of your software team, those two are typically sufficient. But as you go along, your team may want to measure additional items if you find them valuable.
Mechanics of measurement
One often-cited reason software teams don't report up to executives with a scorecard is the administrative time it takes to compile the numbers, put together the report, and answer the questions that invariably come back down. But any department in any business could give those same reasons.
Forty years ago, Fred Brooks wrote an essay in his Mythical Man-Month book called "The Surgical Team". In this essay, he lays out a framework for the ideal software team structure. Effective large teams end up with a network made up of many of these team units. Large teams typically aren't effective without subdivision into a structure similar to Mr. Brooks's structure. This is probably where the notion of "Feature Teams" came from, but I digress. In this essay, Mr. Brooks discusses a team secretary. This role is responsible for all the recordkeeping, logistics, and outside communication for the team, much like what is necessary for a surgical team in an operating room. Surgeons need to stay focused on the patient, so there is a need for someone to enable them to do just that.
Each team should have a non-engineer responsible for administrative excellence for the team. Without this role, we frequently see teams that underperform not because of a lack of engineering prowess, but because of completely non-technical administrative misses. In short, managing a scorecard is an administrative task, so it should be done by someone strong in that area, even if an engineer is asked for a particular number.
Conclusion
To conclude, every effective software team needs a scorecard. The scorecard is the basis for a periodic report to a company's executive team. It answers so many questions, such as "How much can my team deliver?" Without a scorecard, all we know is how many hours the team works. It doesn't do a company much good to have a team that works 100-hour weeks if the production system is brittle and new changes take months to implement. A scorecard tells us how our team is doing now, and it tracks our progress as we implement the principles and practices of the Clear Measure Way on our journey to a fully effective and high-performing software team.
Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.
This program is syndicated on many channels. To send a question or comment to the show, email programming@palermo.network. We’d love to hear from you.
To use the private and confidential Chaplain service, use the following: Gentleman: 512-619-6950 Lady: 512-923-8178
In this episode, Jeffrey shares how an executive oversees a software team.
Situation
Our industry struggles mightily with failed software projects. On average, half of projects still fail. Failure is defined as the executive who authorized the budget wishing he hadn't. The project is so over budget and so behind schedule that the company would have been better off never starting it. Even in the middle of these projects, executives can feel powerless to abort them for fear of sunk costs. And without knowing the right questions to ask or the right reports to demand, the executive in charge doesn't feel in charge at all. He's left choosing between trusting the team still longer or the nuclear option of scrapping the entire thing.
Mission
Right now, if you are an executive overseeing a software group, I want to equip you with the tools to do that well. If you work on a software team, use this video to give your software executive the information he needs to know the project is on track, or the insight to know what the team needs to do a good job.
From here out, though, I'll call you the software executive. Even if you've never programmed any code, you are running a software team. Their success will steer your future career, so this is important. Don't keep going on faith. Don't proceed merely trusting that someone else reporting to you knows how to do your oversight job for you. Lean in. And I'll give you the questions to ask, the tools to use, and the practices to deploy so that you can safely guide your software project to success. And most importantly, if your current software project is veering toward failure, I'm going to empower you to stop the bleeding and get it back on track.
Execution
Before diving into the guidance, I want to paint a mental model for you. Think of every other department in the company. Think of every group. Think of every team branch on the org chart. Each one of them is responsible for delivering some output valuable to the business. And each of these teams also needs certain inputs in order to deliver those outputs. If the outputs are not delivered, the team's leader is typically replaced. And the leaders who excel are the ones who set their team members up for success.
Mental Model
Picture a well-run factory. It is arranged well and operates efficiently every day in a safe manner. The assembly line flows at a good speed, with incoming materials delivered at the right cadence to keep it going. Quality issues are prevented and detected very early. Hourly and daily throughput measures are tallied and reported up the management chain. Quality and throughput measures are paired with acceptable thresholds and established as a standard, with better numbers as stretch targets. Then, the executive in charge ensures that the factory or assembly line is organized in a way where each team member understands the job and what activities it will take to meet the targets.
What we don't do is declare a building to be a manufacturing plant, ask a team to come work inside it, and then come back to check on it a month later. The people we staff on the team are typically not the same people needed to design the process for how the team should work. And Scrum has done the industry a disservice by spreading the notion of self-organizing teams. Even certified ScrumMasters are trained to ask the team what they want to do and then work to remove issues blocking it. This isn't leadership. Only when a team is working in an efficient manner can the lower-level details be turned over for self-organization. An appropriate leader (you) is always necessary to put the overall structure in place for the team so that real, measurable throughput can build momentum.
I started out with a factory and assembly line analogy. And many knowledge workers will rightfully object that the nature of the work is different. And it is. Earlier in my career, I was one of the self-organization promoters, and I was banging the drum about knowledge work being inestimable or unmeasurable. But speaking for myself, what I liked most about that message was that it gave me space to dive into the work without having to report up as much. It gave me more space as a programmer. But what it didn't produce was less risk for the executive who authorized the project budget in the first place.
This challenge exists in all fields of knowledge work as well. Managerial accountants and CPAs also have tough problems that don't have rote solutions; the rote solutions have been automated away by good accounting software. But if your CPA takes forever to figure something out and then bills you twice what you budgeted, you still have a problem. Sales is another area with some similarities to the "magic" of software development. You want a certain pace of sales, and the staff just wants to get to work. But seasoned sales executives know that without a good sales team process, closed sales won't happen. Even enterprise sales that take 3-6 months or longer don't just ride on the "trust me" message of the sales rep. Good sales executives put processes in place with measures: the number of leads contacted, meetings held, emails sent, phone calls made, and networking events attended.
My goal in this introduction is to suggest that we dispense with any notion that software is too complex to be managed like other departments in the business. I've been managing programmers for 17 years. All we have to do is raise the conversation up from the technical jargon, and we get to a place of business language where all the executive tools apply. Whether you like to use OKRs, EOS L10 meetings with a scorecard, or just regular weekly metrics, you can apply the oversight methods of your other teams to your software team. Let's get into it.
Team Alignment
Before we discuss software-specific issues, let's apply what we already know about team formation and team alignment. If any team is going to be high-performing, it has to be aligned and going in the right direction. The old model of forming-storming-norming-performing applies just as well to the software team. And the Clear Measure Way's Team Alignment Template (TAT) provides a form to document the team's alignment. Just like other parts of the company, without consistently reinforcing the vision for the project and the business strategy that caused a software project to be commissioned, a team will stray. It's human nature. It has nothing to do with software. And regardless of what information is chosen, the team must send a periodic report to you, the software executive. After all, you are giving a report to your executive team or board of directors. And if you have no report from the team, then it's hard to do your briefing. So you need some form of a scorecard. The Clear Measure Way curriculum also includes a Software Team Scorecard Template you can use. We suggest the minimum set of measures to report. As time goes on, you'll want to add more.
Team Skills
Just like any other team in the business, if your software team doesn't have the skills needed to execute a particular project, you won't succeed. But if you haven't cataloged the required skills or taken an inventory of current skills, you don't know. And one of the peculiar traits of many software developers is their inventor personality. If you ask them, "can you do _", they will answer, "Yes, I can do that," even when they have never done it before. They will tell you they can. After all, Orville and Wilbur Wright said that they could make a flying machine. It turns out they did, but that process was invention, not implementation.

To inventory your skills, you need to know what your team members have done before, not what they believe they can learn to do. If you have a smartphone app project in front of you but no one who has ever put a smartphone app into production, then you are missing a skill. This is just one example, but you can see again that any department in your business goes through the same skills planning. If your accounting department doesn't have anyone who has ever done inventory accounting for large warehouses, and you intend to build a warehouse, you would need to recognize this and augment the accounting team.

There is no such thing as a "full-stack developer". Oh, you'll find it on the job boards, but "full stack" means very different things to different people; it depends on the technology stack. So if someone puts "full-stack developer" on their resume, you have to look at the projects they have done, which constrains their definition of "stack". In addition, some skills represent answers to strategic risks. Take security: security breaches can tank entire lines of business. This is not just another technical skill; it's a department competency. So I encourage you to get specific about needed skills and current skills so that you understand the skills you actually have and the ones that are lacking.
Then you can build a training and staffing plan for your project. Chances are some of your existing people can do some training and add some skills. Then there will be other skills that need to be sourced from the outside, either temporarily or with a permanent hire.
Establishing quality
We all want our team to move fast and deliver at a rapid pace. But from an oversight perspective, demand quality output at whatever pace the team can deliver first. Measure the pace at which they deliver with a given level of quality. Think of when you were learning to type: the measure of typing speed is words per minute with some number of errors. You know that 100 WPM with 100 errors doesn't do you any good, because that doesn't represent 100 typed words; it represents 100 misspelled words that have to be fixed. You want 100 WPM with 0 errors.
Capers Jones, in his writing about software metrics, notes that teams who prioritize productivity and treat quality as something to trade off against it end up suffering from poor quality. Then, with bugs mounting, more and more of the software team's capacity goes to fixing bugs. With less of the team's capacity going to delivering features, productivity slows, creating more pressure to re-establish productivity. The team members, under more pressure to perform, take more shortcuts in order to "get things done", but this just yields more bugs, which take still more team capacity to tackle. With only a small fraction of the team's capacity dedicated to new features, overall productivity tanks. Over time, this causes some teams to pitch a new plan to management: "We need to modernize this system", which is code for "fixing this is more effort than starting over from scratch." This is the equivalent of waving the white flag and surrendering. No army that surrenders can later claim victory.
As the software executive, this is where your leadership comes in. Sequence the establishment of a quality standard first, before challenging the team to increase the pace of delivery. Measure the number of bugs caught before production, and the number of bugs caught by users in production. Measure how long it takes to fix each bug. All of the modern work-tracking tools will do this well for you. This is the easy part. Your leadership is important here because you are establishing an important principle for your team to abide by. That principle is that quality should be prioritized over the speed of delivery. Because you know that adopting a speed-over-quality strategy yields neither speed nor quality. In weekly team meetings, which every team should have, ask the same question over and over. "Tell me about the bug that escaped into production. What are we changing so that kind of bug can never get to our users again?" Their answer will be different every time, but your question will be the same. Ask for a tour of the code that caused the bug. If you can't understand the explanation or the code, then you've found a quality hot spot that you'll want to ask more questions about. Don't believe the lie that "the code is too complex for you to understand." After all, you wouldn't accept that excuse from an electrician or any other trade. After all, the purpose of the software is to simplify a domain that has higher complexity without the software.
In any of the teams you oversee, you'll want to understand the engineering practices that are in place. Here are a few that every software team should be using:
- Test-driven development
- Continuous integration
- Static analysis
- Pull request checklists (a modern implementation of a formal change inspection)
I expect the team to have other practices in place as well in order to ensure that quality is kept to a high bar. Without these, your team will struggle unnecessarily to keep quality high on a multi-developer team.
Achieving stability
Once a team has the practices in place that enable it to deliver code free from defects (bugs), the next priority is to get that code into a production environment in a stable fashion. Chances are you don't just have one new piece of software; you have existing software in production that has stability issues from time to time. Stability issues can have one or more of the following symptoms:
- Sluggishness
- Outages/goes offline
- Error messages or frozen screens
- Abnormal behavior/bugs that can't be reproduced by engineers
When users report any of these symptoms, you have a production issue. Having good language around these symptoms gives you clarity in your oversight duties. You'll want to make sure the appropriate stability measures are in place to track the stability of your software as it runs in production.
Sometimes, teams can be gun-shy about production deployments. They might advocate for monthly deployments or after-hours deployment events with many hands on deck. This is technically unnecessary but commonly born from a previous unpleasant experience making changes to a production environment. After a deployment goes bad, developers can become hesitant, wary, and distrustful of the process because they consider it dangerous. But a large inventory of undeployed software is not only a large investment that isn't generating a return, but it is also a growing risk of unproven system changes. All departments that manage throughput understand the power of limiting work-in-process (WIP). Infrequent deployments queue up far too many changes waiting for a stressful, error-prone deployment event.
Ultimately, your two goals to achieve stability are:
- Prevent production issues
- Minimize undeployed software
You can measure these on the team's scorecard by tracking weekly metrics:
- Number of deployments for the week
- Number of production issues for the week (separated by severity)
- MTTR (mean time to recovery/resolution)
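These weekly numbers are cheap to tally from your work-tracking or service desk data. A sketch, assuming hypothetical incident records shaped as (opened, resolved, severity); real records would come from your support desk tool:

```python
from datetime import datetime, timedelta

def weekly_stability_row(deployments, incidents):
    """Summarize one week of stability measures for the scorecard."""
    major = sum(1 for *_, severity in incidents if severity == "major")
    minor = sum(1 for *_, severity in incidents if severity == "minor")
    recovery = [resolved - opened for opened, resolved, _ in incidents]
    mttr = sum(recovery, timedelta()) / len(recovery) if recovery else timedelta()
    return {"deployments": deployments, "major": major, "minor": minor, "mttr": mttr}

# Hypothetical week: five deployments, one major and one minor issue.
incidents = [
    (datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 4, 11, 0), "major"),   # 2 hours
    (datetime(2024, 3, 6, 14, 0), datetime(2024, 3, 6, 15, 0), "minor"),  # 1 hour
]
row = weekly_stability_row(5, incidents)
print(row["mttr"])  # 1:30:00, the mean of 2 hours and 1 hour
```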
As with overseeing bugs, as mentioned above, you can ask your team the same questions to drive the right behaviors:
- "What features/changes are tested and ready for production?"
- "What was the root cause of that production issue, and what are we changing so that type of issue can never happen again?"
- "What should we strengthen about our environment so that we are able to resolve issues faster next time?"
As with quality, there is a minimum set of practices that every team should employ if you expect to run a stable software system in a production environment:
- Automated DevOps from day 1 of a new project (eliminate manual, monthly deployments)
- Small releases
- Runtime automated health checks (built-in self-diagnostics)
- Explicit secrets management
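A runtime health check is just a set of self-diagnostics the application can run on demand. A minimal sketch, assuming hypothetical "database" and "queue" checks; a real .NET implementation would typically expose this through an HTTP endpoint, such as ASP.NET Core's health checks middleware:

```python
def run_health_checks(checks):
    """Run each named self-diagnostic and report overall health."""
    results = {name: _passes(check) for name, check in checks.items()}
    return {"healthy": all(results.values()), "checks": results}

def _passes(check):
    """A check passes when it returns truthy without raising."""
    try:
        return bool(check())
    except Exception:
        return False

# Hypothetical diagnostics; real ones would ping the database, the
# message queue, disk space, and so on.
status = run_health_checks({
    "database": lambda: True,
    "queue": lambda: True,
})
print(status["healthy"])  # True
```

Monitoring can then poll this endpoint, turning "abnormal behavior" reports from users into alerts your team sees first.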
When production issues crop up (and they will from time to time), the following practices enable your team to diagnose them more quickly and come to a resolution:
- Centralized OpenTelemetry logging, metrics, and traces
- An APM (Application Performance Management) tool with a shared operations dashboard
- A formal support desk tool with ticket tracking, anomaly alerts, and emergency alarms
If some of this sounds familiar, it's because many of these are the software parallels of practices used to operate any other factory or assembly line. In a factory, if part of the production line experiences an issue, an obvious alert triggers, and staff spring into action to resolve it locally before it becomes a factory outage. For more serious problems, emergency alarms stop the line and call everyone's attention to rally around the problem and get the production line back up and functioning. While the tools are different, the way of thinking is the same. Here are some questions to ask your team in order to gain insight into how these may or may not be implemented:
- "Would you please give me a tour of our logs and telemetry that allow me to see how users are using our software?"
- "How do we currently train a new team member to be on-call for production support, and what dashboards should they be looking at to ensure the software is functioning in a stable fashion?"
- "What events currently trigger alerts, and what events currently trigger alarms? Who receives alerts and how? How do we all receive alarms?"
Increasing speed (productivity)
Let's finally turn our attention to increasing speed. That was quite a bit of information to digest before discussing productivity, but for good reason. With quality problems, our team is diverted to diagnosing and fixing bugs rather than working on new changes and features. With stability problems, our team is yet again distracted from new changes, because the production environment rightfully takes priority. Even if we staff dedicated systems engineers to support the production environment, they typically can only operate a stable system. For high-scale systems, it's normal to constantly adjust the number of servers, cloud CPUs, or Kubernetes pods based on load, and to watch queue lengths as data flows to be sure it's being processed within established SLAs. But when errors are happening and the system is not behaving as the systems engineers have been trained to expect, those issues are escalated to the software development team. And that is where development capacity goes.
The power of prioritizing quality and stability first is that the result is 100% of your team's capacity actually going to the new work set before it. With this achieved, we can look at what then causes a team to be able to move fast when it is actually able to work on new software changes.
From an oversight perspective, I'd like to paint a picture of how to think about your team's productivity, throughput, or pace. Let's take an analogy of the Baja 1000 desert race. To do well in this race, you need to finish. That means you need to pick a pace that will not cause your driver or machine to expire. Then, you need to navigate well. If your drivers get lost or go off course, they drive many more miles than necessary. Picking a good course and staying on that course shortens the miles necessary to finish the race. Even so, an obstacle may emerge that needs a change of course because of new information learned. Finally, the drivers must drive FAST along the chosen route.
Let's apply this analogy to a software team. The Team Alignment Template has given us a tool to ensure everyone is clear on where we are going, that is, what business outcome we want to achieve. This is the finish line. Feature A or Feature B is not a finish line. Any individual feature is akin to a particular route on the race course. We are choosing Feature A because we reasonably believe that changing the software in this way will progress us toward our objective. But as we move along, we need to watch out for new information that would help us learn that Feature A might not be the progress toward our objective that we hoped it would be.
Let's pause now and tackle a fallacy that's been promoted heavily in our industry: the fallacy of the "Product Owner". The Scrum curriculum heavily touted the Product Owner as the role that knew the customer so well that he could prioritize the backlog, and because the Product Owner had prioritized the items, they were deemed to be the right software changes to make. In practice, so few teams have been able to find a person with that depth of customer knowledge that the role of Product Owner hasn't worked. The 2018 State of DevOps Report by Puppet Labs shared a study finding that teams using Product Owners had a batting average of about .333. In other words, the Product Owner was right one third of the time: when those changes were put into production, they yielded the desired outcome. What's interesting is that another third of the changes put into production yielded no progress toward the business objective. And the final third actually hurt the performance of the software and moved the business away from its objective. These changes had to be hurriedly backed out.
In your oversight role, don't rely on anyone being so prescient that you trust them implicitly to decide what changes to prioritize. Instead, think about it like any other department in the company: measure the result and adjust based on the actual data you collect. This is another reason for prioritizing stability ahead of moving faster. The same practices that achieve stability yield the capability to collect data for analyzing which features produce the desired result.
Now that we have a good mental model for how to increase speed toward a goal, we need to measure the current actual speed. You'll want to add more measures to the team's scorecard. Add weekly numbers that represent progress toward the business objective of the software. If the software is related to e-commerce, you may add daily revenue. If it's an inventory system, you may add numbers that are reported on executive team scorecards. This gives your software team ownership of targets that other executives see, and they can participate more fully in improving those business measures. When it comes to software-specific measures for the scorecard, I suggest these as a minimum:
- Desired # of issues delivered per week
- Current # of issues delivered this week
- Average time an issue spends in each status
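As a sketch of how the third measure can be computed, here is a minimal Python example. The event data and field layout are hypothetical (tool-agnostic); the idea is simply to measure the time between consecutive status changes for each issue and average it per status.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status-change events exported from a work tracker:
# (issue_id, status, entered_at). Names and values are illustrative only.
events = [
    ("A-1", "Ready",       datetime(2023, 1, 2, 9, 0)),
    ("A-1", "In Progress", datetime(2023, 1, 2, 13, 0)),
    ("A-1", "Done",        datetime(2023, 1, 3, 13, 0)),
    ("A-2", "Ready",       datetime(2023, 1, 2, 9, 0)),
    ("A-2", "In Progress", datetime(2023, 1, 4, 9, 0)),
    ("A-2", "Done",        datetime(2023, 1, 4, 17, 0)),
]

def average_hours_per_status(events):
    """Average hours each issue spends in a status before moving to the next."""
    by_issue = defaultdict(list)
    for issue, status, ts in events:
        by_issue[issue].append((ts, status))
    durations = defaultdict(list)
    for history in by_issue.values():
        history.sort()  # chronological order per issue
        for (start, status), (end, _next_status) in zip(history, history[1:]):
            durations[status].append((end - start).total_seconds() / 3600)
    # The final status (e.g. "Done") has no exit time, so it is excluded.
    return {status: sum(hours) / len(hours) for status, hours in durations.items()}
```

A long average in any one status points at the bottleneck to go investigate.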
If you are just starting this type of measurement, you might not know what target to set for the Desired # of issues delivered per week. Go ahead and defer that until you've measured actuals for at least a month. An important principle on which these measures depend is commonality. The shipping industry is able to deliver an object of any size or shape to a destination because of packaging standards: there are envelopes, small boxes, long tubes, pallets, and even shipping containers. In software, no two features are the same. In previous decades, and still today, teams have attempted to use methods of estimation to get to numbers that could be relied on. No method of estimation has yet reached that goal. If our work tracking system has some features in it that are 10x or 5x or 2x the size of other features, it's hard to get the team into a flow of consistent delivery. Again, other departments that measure throughput know that the work needs to be made common-sized in order to empower the team to shine and deliver at an increased rate.
In software, the practice to embrace is Feature Decomposition. Project management has the practice of Work Breakdown Structure, and breaking units of work down into smaller tasks is used widely elsewhere to make work more approachable and manageable. Feature Decomposition is the Work Breakdown Structure of software. Guide the team to break down software changes into tasks that can each be reasonably completed in one day of effort. For some features, you will have to challenge the intended design in order to accomplish this. The result will be development tasks that are all roughly one day of labor, and with a common-sized unit of work, you can measure throughput. But measuring throughput isn't the only reason for doing this. Large software changes that are not broken down are typically where other problems hide: faulty analysis, undefined architecture, incorrect assumptions, and undiscovered unknowns. Breaking down development work exposes these hidden problems, further increasing the quality of what is delivered. It also forces more design work upstream of coding, because design decisions have to be made before the initial code. Starting to code a feature that is too large mixes analysis and design conversations into the middle of unfinished code: when the developer reaches a point where he finds an unanswered question, coding has to stop, and an impromptu meeting has to happen because that feature is now blocked. You can safely assume that a feature or change expected to take several days will not take several days. It will take 2x or 5x or 10x longer than that. The several-days estimate is an estimate of no confidence. Only when you have an estimate of one day can you be confident that all the needed work has been identified and understood.
In this process, you'll also see more detailed design diagrams, since more knowledge will be flowing throughout the team. As you increase your team's delivery speed, here are some minimum practices to expect:
- Kanban-style work tracking (a work board where items move from left to right)
- Feature decomposition
- Design diagrams
- Refactoring
The last item in this list, Refactoring, is a mature practice, and you can find very good books on it. It recognizes the reality that we will learn from how our users use the software after it has been built, so we should expect to make changes based on that learning. Refactoring is our method for making those changes. We are going to learn that a feature should behave and be designed differently, and refactoring is the means by which we change the software so that the feature ends up designed the new way, as if we had been designing it that way from the start. Here is a suggested question to ask when you learn something new from users in production:
- "Since we need to change Feature A, what parts need to change so that the outcome is as if we intended to design it this way from the start?"
The lack of refactoring will compound over time into a code base that is hard to understand and hard to follow. Refactoring ensures that the code is always easy to understand at a glance.
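To make the idea concrete, here is a tiny, hypothetical refactoring sketch (in Python for brevity; the function names and discount rule are invented for illustration). The behavior is identical before and after; only the design changes, so the code reads as if the discount rule had been a named concept from the start.

```python
# Before: the pricing rule is buried inside the checkout function.
def checkout_before(items):
    total = 0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total *= 0.9  # bulk discount tangled into checkout logic
    return total

# After: the rule is extracted into a named, independently testable unit.
def apply_bulk_discount(total, threshold=100, rate=0.1):
    """Apply the bulk discount above the threshold; otherwise pass through."""
    return total * (1 - rate) if total > threshold else total

def checkout_after(items):
    subtotal = sum(price * qty for price, qty in items)
    return apply_bulk_discount(subtotal)
```

The refactored version behaves identically, but the next change to the discount rule now has an obvious, easy-to-understand home.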
As you measure your team each week, look for the current # of issues delivered each week to increase. You'll also notice bottlenecks to increased speed because you are measuring the average time an item spends in each status on your work tracking board. When bottlenecks are discovered, resolve them. It was the lack of tracking time per status that allowed bottlenecks to remain hidden.
With all this, you have a very strong oversight position for your software team. It will be important to keep quality, stability, and speed in the proper order. Schedule pressures can tempt teams to forget this order and let quality or stability slip by tolerating shortcuts. But it doesn't take long for shortcuts to accumulate, resulting again in poor delivery speed. Whenever a bug makes it out to users, or whenever a production issue happens, reinforce to the team the importance of taking action so that this kind of bug or issue can never happen again. Follow the logic: for a bug or issue to never happen again, it's not just about the team trying harder or gritting their teeth tighter. It's about real root cause analysis and fixing the problem at the root so that it is impossible for the bug or production issue to happen in that way again.
Leading and strengthening the team
While you are enjoying your high-performing team, stay vigilant. Recognize when you need to go back and redo some parts of the process: when you add a person to the team, or when a person leaves the team. Go back to the skills assessment and inventory. Review the Team Alignment Template. Allow the now-changed team to form again, storm again, and norm again, so they can perform again. Keep them equipped. Make sure every member of your team has an avenue for ongoing professional development. Keep measuring the team; after all, "A" players look forward to the report card. Create an environment where "A" players can celebrate. Your less-than-"A" players will self-select out. When you identify a "B" player, craft a professional development plan with them so that they become an "A" player. Your "A" players want to work with other "A" players, and they want to work where it is normal to succeed. The working environment you have crafted for them will empower them to succeed, and they won't want to work anywhere else. You will have created a team with longevity that has established quality, achieved stability, and is increasing its speed of delivery every day.
Conclusion
You have what it takes to oversee your team as a software executive. You can do it. By implementing these principles, leading your team with a scorecard of relevant measures, and putting these team practices into place, you will have a team that is an asset to your business. I know you can do it. And we are here to guide you. May God grant you wisdom as you lead your team.
Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.
This program is syndicated on many channels. To send a question or comment to the show, email programming@palermo.network. We’d love to hear from you.
To use the private and confidential Chaplain service, use the following numbers. Gentleman: 512-619-6950. Lady: 512-923-8178.
In this episode, Jeffrey discusses the architecture of GPT-3, the technology behind ChatGPT, and how you should think about this technology in 2023.
Situation
ChatGPT is getting a lot of press because it's the first freely available implementation of GPT-3 that has captured the imagination of the masses. Many are pointing out the awesome and surprising capabilities it has, while others are quick to point out when it provides answers that are flat-out wrong, backward, or immoral.
Mission
Today I want to elevate the conversation a bit. I want to go beyond the chatbot that has received so much press, look at the GPT-3 technology itself, and analyze it from an architectural perspective. It's important that we understand the technology and how we might want to use it as an architectural element of our own software systems.
Execution
Introduction
GPT-3, or Generative Pretrained Transformer 3, is the latest language generation AI model developed by OpenAI. It is one of the largest AI models, with over 175 billion parameters, and it has been trained on a massive amount of text data. GPT-3 can generate human-like text in a variety of styles and formats, making it a powerful tool for natural language processing (NLP) tasks such as text completion, text summarization, and machine translation.
The GPT-3 architecture is based on the Transformer network, which was introduced in 2017 by Vaswani et al. in their paper “Attention is All You Need”. The Transformer network is a type of neural network that is well-suited for NLP tasks due to its ability to process sequences of variable length.
The GPT-3 model consists of multiple layers, each containing attention and feed-forward neural networks. The attention mechanism allows the model to focus on different parts of the input text, which is useful for understanding context and generating text that is coherent and relevant to the input.
The feed-forward neural network is responsible for processing the information from the attention mechanism and generating the output. The output of one layer is used as the input to the next layer, allowing the model to build on its understanding of the input text and generate more complex and sophisticated text.
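As a rough illustration of the attention mechanism described above, here is a toy, dependency-free sketch of scaled dot-product attention. Real GPT-3 layers add learned projection matrices, multiple heads, causal masking, and the feed-forward sublayers; this sketch shows only the core idea of weighting values by query-key similarity.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors.

    For each query, score every key by dot product (scaled by sqrt(d)),
    turn the scores into weights with softmax, and return the
    weighted average of the value vectors.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

The weights let each position "focus" more on the parts of the input most similar to its query, which is what gives the model its sense of context.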
To use GPT-3 in a C# application, you will need to access the OpenAI API, which provides access to the GPT-3 model. You will need to create an account with OpenAI, and then obtain an API key to use the service.
Once you have access to the API, you can use it to generate text by sending a prompt, or starting text, to the API. The API will then generate text based on the input, and return the output to your application.
To use the API in C#, you can use the HttpClient class to send a request to the API and receive the response. The following code demonstrates how to send a request to the API and retrieve the generated text:
```
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

namespace GPT3Example
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var client = new HttpClient())
            {
                client.BaseAddress = new Uri("https://api.openai.com/v1/");

                // The Authorization header belongs on the client (or request),
                // not on the StringContent.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "API_KEY");

                var content = new StringContent(
                    "{\"prompt\":\"Write a blog post about the architecture of GPT-3\"," +
                    "\"model\":\"text-davinci-002\",\"temperature\":0.5}",
                    Encoding.UTF8,
                    "application/json");

                // POST to the completions endpoint; the model is named in the body.
                var response = client.PostAsync("completions", content).Result;
                if (response.IsSuccessStatusCode)
                {
                    var responseContent = response.Content.ReadAsStringAsync().Result;
                    Console.WriteLine(responseContent);
                }
            }
        }
    }
}
```
End of demo
From the start of this explanation, the text above was generated by chat.openai.com. It can be pretty impressive. But at the same time, it's very shallow. GPT-3 is a machine learning model that has been trained on selected information up to 2021. Lots of information, but selected nonetheless. Here is the actual ChatGPT page that generated this. Notice that it admits it doesn't have information past 2021.
Let's dig deeper, though, on what GPT-3 is and how it came to be. Let's look at the theory behind it so that we can see whether we should use it as an architectural element in our own software.
- Let's go back to 2017. Ashish Vaswani and seven other contributors wrote a paper called "Attention Is All You Need". In it, they proposed a new network architecture for their neural network; simplify that and think of a machine learning model. They created a method that could be trained in 3.5 days using eight GPUs and be ready for a complete translation from one spoken language to another. They tested it using English-to-French and English-to-German. Vaswani and several contributors were from Google Brain, four from Google Research, and one from the University of Toronto.
- In 2018, four engineers from OpenAI wrote a paper entitled "Improving Language Understanding by Generative Pre-Training". They leaned on Vaswani's paper and dozens of others and came up with a new method for Natural Language Processing (NLP). They describe the problem of training a model with raw text and unlabelled documents. That is, if a model is trained on all available information in the world, it's a mess. Culture divides the world, and all queries posed to an ML model are in the context of culture: geographic culture, national culture, religious culture, trade culture, and more. Existing models had to painstakingly label all data before feeding it into the model, or it got mixed in with everything else. Take users in different countries as a stark example. In the US, where 70% of the population claimed Christian as their religion according to the latest 2020 survey, users who receive answers condemning or criticizing Christianity would have a poor user experience. In Afghanistan, however, where it is illegal to be a Christian, users would have a poor experience if the model returned answers showing Christianity in a positive light.
So from an architectural perspective, it's important to understand what GPT-3 is. Remember, it stands for Generative Pretrained Transformer 3, and "Pretrained" is key. There are now several dozen online services that have implemented GPT-3 and trained a model. Text drafting and copyediting are already becoming popular, and video editing is growing. Understand that by taking a dependency on one of these services, you are relying on them to train a model for you. That alone is a lot of work and can save you a lot of time. But inquire about the body of data that has been fed into the model so that you can make sure your users receive the experience you want for them. I gave one example of cultural differences between countries. But for software geared toward children, there is a mountain of information on the Internet that you don't want in the model if it's generating responses for kids. Keep that in mind. ChatGPT has had to have bias injected into it, because bias seems to be a more human trait than a computer trait. Time Magazine did a write-up on how OpenAI invested in a project to label and filter the data used to train the model. In short, it was a big filtering operation. There is a lot of filth on the Net, so according to your own morality (another word for bias), that's a good thing. But I'm sure you will also find some areas where they inserted bias that you don't agree with. Again, it's all about training the model with labeled data that fits the culture of the users. Early users are circulating answers that seem fishy and serve as examples of the filtering project OpenAI commissioned. ChatGPT can draft blog posts and short statements as well. That's pretty cool. I'm Italian; my family immigrated from Sicily to Texas in 1910, so I love this first example: "Write a tweet admiring Italians".
The response is "Italians are a true inspiration - their rich culture, stunning architecture, delicious cuisine, and effortless style make them a marvel to admire 🇮🇹 #AdmiringItalians"
Wow, quite flattering. Then, you just go down the list and throw in some other races. The trainers of the ChatGPT model labelled data favorable to Italians, Asians, Hispanics, Indians, Blacks, and Whites. But it seemed to have a problem with that last one, so we can see that the model definitely has some different training there. Architecturally, you need to decide whether a 3rd-party model is a fit for your software or whether you need to train a model that fits your users' needs more specifically. Let's move on.
OpenAI is very well capitalized, and I expect very interesting things from them. Microsoft announced a $1B investment in the company in 2019. With an investment like that, I would also expect OpenAI technology to be well integrated with Microsoft Azure and .NET development tools. Microsoft has been expanding its Machine Learning capabilities for a long time, but GPT-3 is groundbreaking. As the research shows, a Transformer model can be trained in a matter of days with eight GPUs and be ready for testing. Some of you will just want to call the HTTP APIs of some of the GPT-3 services. Others will want to implement and train their own model so they can label the data being fed into it to guide responses.
- Elephant in the room: Is GPT-3 going to replace me as a programmer? Short answer: no. I've been around long enough to have seen every decade produce a story of "this technology will make programmers obsolete". It hasn't happened, and it's not going to happen. The same thing can be said about mechanics: even if every automobile is converted to electric or hydrogen or whatever, we'll still need mechanics to fix and maintain them. Things change, but they don't go away. Now, the developers of the 90s who considered themselves HTML programmers have had to change dramatically, because HTML programmers had a short run. HTML is now just a small portion of the skillset; CSS radically changed HTML programming, and then Bootstrap, Material, and the other CSS frameworks radically changed it again. So the tools and how we use them will keep changing, but the need for people to design, implement, operate, and maintain software will still be there. It's an exciting time to be a programmer.
- Right now, even if your current software wouldn't benefit from a GPT-3 model, you should add it to the toolbelt for you and your colleagues. For example, there are so many questions we take to StackOverflow or a web search.
Or perhaps users of some analytics database need help with a query. Now you have a new tool to help you draft query syntax.
Summary
If you haven't looked into GPT-3, you'll want to. It's a big leap ahead in the field of Machine Learning, and its capabilities can be a component of your Artificial Intelligence solution, or just part of an existing software system. I'd encourage you to read the research papers that describe it in more detail so you know how it's designed. After all, it's just software. You need to understand the capabilities of this new software component so you can choose how to use it to your benefit. There's nothing magical about it. It's just software, like every other library and API service you currently use.
I hope this has aided you in upleveling your understanding of GPT-3 and how best to use it in your own software.
Resources:
- Attention Is All You Need
- Improving Language Understanding by Generative Pre-Training
- OpenAI API
In this episode, Jeffrey discusses why so many teams are not happy with the pace of software delivery.
Situation
Most software teams we see are not moving at the pace their companies would like. One of the Clear Measure Way tools is a self-assessment; it's easy to find on the Clear Measure website. One of the subjective questions included is "Are you happy with the pace of delivery of your software team?" Most respondents are not able to answer YES. We're going to talk about that.
Mission
Many businesses have decided to have internal software development teams. Companies that are tech companies have to. For others, it's a judgment call. Over the last 25 years, many non-technical companies have outsourced the creation of software. They lost a lot of money, didn't get what they thought they were going to get, and have shifted to operating software engineering teams in-house. They still consider custom software to be strategic, but they want more control by hiring their own employees. They are then frustrated that they don't actually have more control. They might have more visibility, but many find that having the in-house team doesn't actually increase the pace of delivery or solve every problem. The goal of this video is to go over the common categories of time suck that sap the capacity of software teams everywhere. My hope is that once you understand where all your team's time is going, you can make decisions to change that and redirect the effort toward the progress you want.
Execution
There are five categories of work for a software team:
- Working on new software
- Diagnosing or fixing or reworking past work we thought was done
- Diagnosing or fixing the software as it runs in a production environment
- Administrative, non-software work
- Time off
Working on new software
Diagnosing or fixing or reworking past work we thought was done
Diagnosing or fixing the software as it runs in a production environment
Administrative, non-software work
Time off
The Clear Measure Way encourages us to sequence the establishment of quality, then the achievement of stability in production, and then a focus on increasing the speed of delivery. We have to play some defense before we can focus on offense. Once we are focused on speed, if we haven't established the right level of quality, and if we haven't achieved good stability in production, we will be on the losing end of the capacity equation. Our team's capacity will be constantly stolen away from us. It's the bed we make, and we have to sleep in it. The good news is that it's our bed. There are straightforward, known practices for establishing quality. Known practices for achieving stability. We just have to put them in place.
Summary
If your team hasn't been delivering at the pace you want, and you've struggled to describe why, start measuring these five categories. Then you'll find what's stealing your capacity. And once you know where you are, you can build your travel plan for getting to where you want to be.
Download the Team Alignment Template
In this episode, Jeffrey discusses how to align a software team for high performance. Recognizing that the team's architect is the leader and has a big job to do, a tool called the Team Alignment Template facilitates the documenting and teaching of the team's purpose, values, and other strategic decisions so that all engineers can work and pull in the same direction.
Situation
At the beginning of a project, when a new team is formed, or when the staffing of an existing software team changes, all team members need to align and get going in the same direction. Without intentionally achieving this, each team member will have a small or large difference in the vision for how to proceed. It's the job of the architect to make clear the path and to align all team members on that path in order to establish quality, achieve stability, and increase speed of delivery.
Mission
The goal of the Team Alignment Template is to:
Execution
The Team Alignment Template is a simple, 1-page document. After filling in the blanks, gather your team together and discuss the contents. Invariably, there will be discussion around some items in order to gain understanding. Any time the staffing of the team changes, and at monthly or quarterly boundaries, review the Team Alignment Template again.
Summary
Without intentionally aligning a software team, each team member will have a small or large difference in how to proceed. This will result in reduced quality, stability risks, and missed opportunities for increasing speed of delivery.
Download the Team Alignment Template
In this episode, Jeffrey discusses how to design new applications for automated DevOps. Automating the DevOps process from Day 1 is part of the "Achieving Stability" pillar of the Clear Measure Way.
Situation
Once a software project or new application gets going, the focus tends to be on features. And once code is being written but not being deployed frequently, the team starts to slow down right from the get-go. It might be tempting to think that you don't need DevOps automation just yet. But choosing not to put in a particular process is implicitly deciding to put in a manual process. The first bit of code you have will end up being manually built, manually tested, manually deployed, and manually monitored. Then the team will work on more and more code, bug reports will start flowing in, and there will "never be a good time" to put in the DevOps automation.
Mission
The purpose of this video is to show you how simple and clear automated DevOps can be and how straightforward it is to put in at the beginning, when you don't yet have any code. Then you never reach a point where you have to "stop the bus" and "stop delivering features" in order to catch up with technical infrastructure that would have been so much easier at the beginning. Along the way, automated DevOps causes you to design features just a bit differently, and if features are designed without DevOps automation, you'll have to retrofit later.
Execution
At the beginning of a new application, you need to think about 7 different areas of your DevOps environment.
What you see on the screen is a DevOps architecture poster from Clear Measure. Lots of companies have used it. It clearly lays out the architecture of your DevOps automation.
Summary
Companies that have adopted the measures and practices in the Clear Measure Way know the importance of DevOps automation and putting it in place on Day 1 of any new software application. I hope this helps you in your journey to establish quality, achieve stability, and increase your speed of delivery.
In this episode, Jeffrey discusses how to empower software teams using the Clear Measure Way.
Context
Achieving rare success
Establish quality
Achieve stability
Increase speed
Lead your team
Exhortation
In this episode, Jeffrey discusses the suggested engineering practices for achieving stability. After establishing quality, achieving stability is the next pillar in the Clear Measure Way along the path to increasing speed. Without stability, the software team will always be devoting some portion of its capacity to diagnosing and fixing stability issues with the software in production.
Priorities
Stability practices
In this episode, Jeffrey discusses using design patterns to increase speed. Speed is a pillar of the Clear Measure Way, just like establishing quality and achieving stability.
A design pattern is an idea; code implementing it is merely an example of that idea.
Resources:
- https://www.gofpatterns.com
- https://learn.microsoft.com/en-us/shows/visual-studio-toolbox/design-patterns-commandmemento
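Since a pattern is an idea rather than a specific piece of code, one small example helps. Below is a minimal sketch of the Command pattern (one of the patterns referenced above): a request is reified as an object so an invoker can execute and undo it. The `Light`, `TurnOnCommand`, and `Remote` names are hypothetical illustrations.

```python
class Light:
    """Receiver: the object the command operates on."""
    def __init__(self):
        self.is_on = False

class TurnOnCommand:
    """Encapsulates the request 'turn the light on' as an object."""
    def __init__(self, light):
        self.light = light
    def execute(self):
        self.light.is_on = True
    def undo(self):
        self.light.is_on = False

class Remote:
    """Invoker: runs commands and remembers them so they can be undone."""
    def __init__(self):
        self._history = []
    def press(self, command):
        command.execute()
        self._history.append(command)
    def undo_last(self):
        if self._history:
            self._history.pop().undo()
```

The same idea could be expressed in any language; the value is in decoupling the invoker from the operation, which is what makes features like undo histories and macro recording possible.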