  • In a way, this question itself is very “un-agile”. Agile should always be forward-looking: “What can we do next?”, “What can we get done in this short period of time?”, “What should we do next?”.

    OK, so you found a possible “defect” in your system. Is it a defect, or did someone slide in a requirements change 3 months ago?

    This reminds me of playing chess. Sometimes a player will make their move when their opponent is distracted. The opponent hears, “Your turn”, and they look at the board. “Which piece did you move?”. The first player just shrugs.

    The point is that you shouldn’t need to know which piece just moved. Every chess position is a “state” of its own, and your best move should depend on going forward from that state, not knowing how the board changed recently.

    It’s the same thing here. You have a situation. Does it really matter how, when, why, or by whom it happened? It shouldn’t, and here’s why:

    Just because it’s a defect (if it is) doesn’t automatically mean that correcting it moves to the top of your “to do” list.

    It’s going to take some non-zero amount of time to change it back to blue. And when you’re doing that, you’re not doing something else. There is always an opportunity cost to doing bug fixes and you shouldn’t treat them in an ad-hoc way. Should you be spending that time, and who gets to decide if you do? It’s not your decision. It’s the PO’s decision to make.

    Maybe the PO doesn’t care about the colour. Maybe they do care, but not if it means some other feature gets delayed. Maybe it’s the most important aspect of the whole system, and there’s no way you can launch with it green. So you cancel the current Sprint and start a new one dedicated to fixing this defect! Maybe they regret asking for it to be blue, and now they’re happy that it’s green.

    If it were me, I’d get a quick T-shirt-size estimate of the work required to change it back to blue, then put it in the Product Backlog to be reviewed with the PO. Maybe have a quick chat with the PO, or send them a memo about it. Maybe the PO will need to check with their SME to see if anyone remembers asking for it to be changed to green. Maybe not. In any event, it either makes its way into a Sprint Backlog or it doesn’t.

    Also, if you’re doing Agile right, then your clients are getting constant, hands-on experience with your system as it is being developed. Going 100 days without some kind of “release” that they can play with and give you feedback on is an anti-pattern. If you are giving them the latest version every week or two and after almost three months they haven’t noticed that the footer is green, then it’s probably not important.

    On to the actual question. Jira is a potential sand trap of administrative complexity. The answer is usually to keep everything smaller: smaller features, smaller Sprints, smaller teams. A team of 5 or 6, working in one-week Sprints, can only do so much per Sprint. If your fundamental unit of work - a story, or a feature, or a ticket - is sized to take something like 1/2 day and forms the basis of your Sprint Backlog, then each programmer on the team can do at most 10 SB items per Sprint (in a perfect world). Depending on the composition of your teams, you’re probably going to have only about 3-4 programmers - which means 30-40 tickets per Sprint Backlog. And that’s a blue-sky number that’s practically impossible in a world with meetings. In the real world, a team of 5 or 6 is going to complete closer to 20 Sprint Backlog items in a one-week Sprint.

    The point being that 14 Sprints covers your 100 days, and each Sprint has about 20 easy-to-understand items in it. Whatever your management tools, it’s a failure if you can’t locate those 280 features in a matter of seconds. And, just from the descriptions, it should be easy to eliminate 270 of them as places where the change couldn’t have happened.
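
    As a quick back-of-the-envelope check, the arithmetic above works out as follows (a few lines of Python using the assumed numbers from this comment, not real project data):

        # Back-of-the-envelope Sprint math, using the assumptions above (not measurements).
        programmers = 4            # a team of 5 or 6 usually has about 3-4 programmers
        items_per_day = 2          # tickets sized at roughly 1/2 day each
        sprint_days = 5            # one-week Sprints

        ideal_per_sprint = programmers * items_per_day * sprint_days   # 40, the blue-sky number
        realistic_per_sprint = 20                                      # what actually gets done, with meetings

        sprints_in_100_days = 100 // 7                                 # about 14 one-week Sprints
        total_items = sprints_in_100_days * realistic_per_sprint       # about 280 small, searchable tickets

        print(ideal_per_sprint, sprints_in_100_days, total_items)      # 40 14 280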

    And when those SB items are small, as they should be, it’s not an onerous task to document inside them the requirements that they are supposed to meet.

    When you have 1-month Sprints with tickets that take 2 weeks to complete, everything becomes a nightmare, because it’s virtually impossible to impose any kind of consistent internal structure on free-form work that big. But it’s almost trivial to do with tiny tickets.

    And the other thing that happens with big tickets is that there’s a ton of stuff programmers do without thinking about it too much. If you’ve got two weeks to finish something, there’s plenty of latitude to over-reach, and the time estimate was just a wild guess anyway. If you have 3 hours to do something, you’re going to make sure that what you need to do is clearly laid out - and then you have to stick to it or you won’t get done in time.

    Did somebody “fix” the inconsistent footer colour while working on some huge, 2-week ticket? Good luck finding out. But that’s not going to happen with tiny, well-documented tickets.


  • Keep in mind that it has been decades since I last used Kermit, but I’m pretty sure the use case it was originally designed for was…

    Connect to a serial port, which had a modem attached. Talk to the modem and get it to dial a number. Presumably, the remote end answered and the port attached to its modem would issue a login prompt. Negotiate the login, issue a bunch of commands to change directories, and then launch Kermit on the remote system. After that, Kermit-to-Kermit communication took over until you terminated the session. Finally, log off the remote system and hang up the modem.

    All of this stuff could be done via scripts. I seem to remember that it would actually wait for a response, and then parse the response in the script. I don’t remember ever doing polling loops.

    If you’re on a *nix box of some type, it’s totally possible to open up a serial port for manual I/O even in something like a bash script, even if you have to reverse telnet to a terminal server.
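
    To make that scripted “wait for a prompt, then respond” flow concrete, here’s a rough sketch in Python using the third-party pyserial package. This isn’t Kermit’s actual script language, and the device path, phone number, prompts, and credentials are made-up placeholders:

        # Expect-style dial-and-login sketch, in the spirit of an old Kermit script.
        # Requires the third-party "pyserial" package; /dev/ttyUSB0, the phone number,
        # the prompts, and the credentials below are all placeholders.
        import serial

        def wait_for(port, token, max_reads=50):
            """Read from the port until `token` shows up in the output, or give up."""
            seen = b""
            for _ in range(max_reads):
                seen += port.read(64)      # returns whatever arrived before the 1-second timeout
                if token in seen:
                    return True
            return False

        with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
            port.write(b"ATDT5551234\r")   # tell the modem to dial
            if not wait_for(port, b"CONNECT"):
                raise RuntimeError("modem never connected")

            wait_for(port, b"login:")      # remote end's login prompt
            port.write(b"myuser\r")
            wait_for(port, b"Password:")
            port.write(b"secret\r")

            wait_for(port, b"$")           # a shell prompt, in this pretend setup
            port.write(b"cd /some/dir\r")
            port.write(b"kermit\r")        # launch Kermit on the remote system; it takes over from here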



  • Well, there are specific hardware configurations that are designed to be servers. They probably don’t have graphics cards but do have multiple CPUs, and are often configured to run many active processes at the same time.

    But for the most part, “server” is more related to the OS configuration. No GUI, strip out all the software you don’t need, like browsers, and leave just the software you need to do the job that the server is going to do.

    As to updates, this also becomes much simpler, since you don’t have a lot of the crap that has vulnerabilities. I helped manage a computer department with about 30 servers, many of which were running Windows (gag!). One of the jobs was to go through the huge list of Microsoft patches every few months. The vast majority of those vulnerabilities “require a user to browse to a certain website” in order to be exploited. Since we simply didn’t have anyone using browsers on those servers, we could ignore those patches until we did a big “catch up” patch once a year or so.

    Our Unix servers, HP-UX or AIX, simply didn’t have the same kind of patches coming out. Some of them ran for years without a reboot.