/trasteando/

Playing around


CAS 2015 Notes

[KY] Eugenio Moliní

[idea] “comuniredes de troyanos” (trojan community-networks)
[ref] molini.es

[MS] Opening - Carlos Ble

Don’t ask for permission.
Be the change you want to see.

[EP] Opening - Aritz Suescun

[idea] The Design / Research Sprint
Inception -> book The Agile Samurai
[videos] Product Discovery - Marty Cagan

[EP] El síndrome de Niggle, la orientación a objetos y la familia de Juan Carlos I - Gerard Chiva

Design = Do the minimum to maximize impact
Integrate changes & failures in your product
[quote] “A work is never finished; it only takes you to the limit of your possibilities”

[MS] Continuous Delivery in a legacy environment - Peter Marshall

It is important to get instant feedback from production
Use CI/CD to share a common vision:

  1. Remove unnecessary human intervention
  2. Monitor / quality gates / steady state (know your steady state and alert when the system deviates from it)

[ref] The Strangler Pattern - can be used to transform a monolith
‘Devops’ and ‘Tech Management’ are supporting teams at Planday. Tech Management is a kind of architecture team, responsible for delivering the technical vision.
[tip] Simplicity: one thing at a time.

[DP] Viaje a lo desconocido: de lo que no soy a lo que soy - David Roncero

Workshop

[EP] El arte de decir que no - Carlos Hernandez

Some justifications used to push features into the product:

  • “Customer X is about to quit if we do not provide feature Y” => Do not implement it if 95% of users will not use it; otherwise you’ll end up with a Frankenstein
  • “We can make it optional” => The product may end up with a control panel like the Enterprise’s
  • “There’s nothing else planned” => Better to use the time revisiting features already implemented
  • “Everybody wants this” => Show me the numbers
  • “Competitors have it” => …yes, but do their users love it?
  • “Someone else will build it” => Have a clear “finish line” for your product (product vision)

Conclusions:

  • Have a clear vision of what you want to build
  • Talk to the users
  • Better for the user to adapt to the product than the other way round
  • The user has to fall in love with what the product IS, not with what it COULD BE / WILL BE
  • Benevolent dictatorship

[EP] El Big Data también es ágil (o debería) - Juan Tomás García

[ref] Kappa Architecture - Jay Kreps: https://github.com/milinda/kappa-architecture.com
The importance of Monitoring & Logging

[EP] Un paso más allá del pair programming: diseñadores empotrados - Poun Studio

Enables better communication:

  • non-violent communication
  • common vocabulary

It enabled:

  • new ways to solve tasks
  • new tools: a style guide

[EP] Design Thinking: the power to accept every challenge - Mariana Ivanova

The space: Freedom to explore & try things
The people: [ref] T-shape (broad soft skills & deep expertise skills)
The approach: Problem space => Solution space | Diverge -> Converge -> Diverge -> Converge

Steps:

  1. UNDERSTAND (the challenge)
     • Goal: creative reframing => one direction
  2. OBSERVATION (the trap)
     • Goal: gain empathy
     • Techniques: interview (5 whys); debrief after the meeting
     • book Interviewing Users: How to Uncover Compelling Insights - Steve Portigal
  3. DEFINE POINT OF VIEW (agree on the problem to solve)
     • Goal: make sure you are working on the same problem
     • Techniques: storytelling / clustering / create a persona / define the POV (= user + need + insight)
     • Tips: do not design for everyone => one thing at a time; do not confuse solutions with needs

——– PROBLEM SPACE ^ | SOLUTION SPACE v ——–

  4. IDEATE
     • Techniques: brainstorming (go for quantity, go for wild ideas, defer judgement)
     • [idea] Reverse Brainstorming
     • [ref] The Power of Bad Ideas - Steve Portigal
  5. PROTOTYPE (fail early & often - because it is easy & fast)
     • Techniques: [idea] dark horse prototype
  6. TEST
     • Negative feedback is the best feedback
     • book The Mom Test: How to Talk to Customers and Learn If Your Business Is a Good Idea When Everyone Is Lying to You - Rob Fitzpatrick
  7. ITERATE
     • From TEST, go back to previous steps
     • Every failure is essential to learning
Find the kid inside yourself (open mind)
DO NOT FALL in love with the process
DO FALL in love with the problem

[KY] Leo Antoli

Correlation != Causation
book Spurious Correlations - Tyler Vigen
Cognitive biases
The problem in software development is usually not a lack of resources, but a lack of knowledge
[tip] Use better product monitoring & exception handling
[tip] small commits => deploy to PRD continuously. If something breaks it is:

  • easy to identify
  • easy to roll back

[tool] screenhero.com
Technical debt = conscious decision / you have to pay at some point != shitty code
There’s always tension between the following, and some equilibrium needs to be found:

  • build the right thing
  • build the thing right
  • build it fast

[ref] The Standish Group - Chaos Report
[tip] In meetings, have each person first write down their opinion, to avoid being biased by the first person to speak
[ref] No true scotsman fallacy
Urgent vs Important
[ref] Anecdotal evidence

[KY] Rachel Davies

Spend your own time learning more:

  • get insights on bad parts

Use people who are happy to try things out:

  • test the idea
  • don’t wait for everybody to accept the idea

Build time for learning

Share what you have learned

Involve everyone

Rachel works at unruly.co; here are some insights into what they are doing:

  • CD: from workstation to PRD
    • no STG
    • automated tests
  • 20% time learning & researching
    • learning is a currency (truly, they have notes)
    • swap teams / organizations
    • learn from PRD: monitoring
  • pair & mob programming
    • challenging pieces
    • whenever agreement is needed
  • strandcast: research news, recorded
  • sharing learning: [idea] use a speaking token
  • retrospectives
  • sit with your business
  • coding dojos
  • swap organizations
    • learn from each other
    • exchange with other teams & organizations
    • take turns => share responsibilities
  • track & reflect: i.e. track that learning is happening (e.g. in a calendar everyone shows what they’ve done during the week: refactor / backlog work / learning / mob-pair programming)

[tip] invest time learning more => then write & present (e.g. about something you learned at CAS)

Agile coach mission: encourage people to try what they want to do / achieve

[TO] Gestión del cambio y hacking cultural - Ángel Medinilla

[tool] http://www.myhappyforce.com/ - measures employee happiness
[ref] http://www.improvement21.com/
book Switch: How to Change Things When Change Is Hard – Chip Heath & Dan Heath
book Predictably Irrational: The Hidden Forces That Shape Our Decisions – Dan Ariely
[video] talks by Emilio Duró
Tools to change the culture:

  • storytelling
  • small things, repeated 1k times
  • pain-driven facilitation
  • [tool] https://www.hoshinplan.com/ => vision
  • use early adopters => not mandatory!
  • labs: 1 afternoon every 2 weeks
  • experiments => kaizen board
  • [idea] champion skeptic
  • [idea] agile corner / agile safari
  • script it
  • action triggers / existing habits

[DP] Soy Persona (no soy recurso) - David Fernández

book Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead – Laszlo Bock
Company owners should build people

Trust is bidirectional:

  • Freedom => Self-managed teams
    • heterogeneity
    • fellowship: pair / mob programming
    • leadership: different people can lead in different aspects
    • interconnection
  • Motivation => Engaged employees
    • Goals
    • Development
    • Communication => feedback
    • Investment
  • Happiness => Happy & Productive people
    • Smile
    • Conciliation (work-life balance)
    • Schedule
    • Kindness

[MS] Dando amor a los tests - Joaquin Engelmo

Client != User
[video] Robert C Martin - Clean Architecture and Design => Framework isolation
[tip] Having a bad test base is worse than having none
Code coverage != test quality
DRY even in tests: builders, object mothers (mothers know about scenarios), [idea] mock providers / factories

Look at test doubles other than mocks (mocks are everywhere):

  • [ref] sociable unit tests: unit tests that talk to real collaborators, not only to mocks (dummies, etc.)
  • helpers & utilities do not have side effects => there’s no need to mock
  • value objects do not have side effects => there’s no need to mock
  • use runners & dependency injection

Test smells
Name the tests properly:

  • failure reporting is based on the test name
  • if the name is not descriptive enough, I’d need to check the code

Improve readability:

  • AAA: Arrange / Act / Assert => separate these phases with a blank line
  • General vs specific setup: specific may sometimes be better in order not to lose context => better encapsulation
  • DSL => private methods / no libraries needed
  • One logical assert per test => create custom matchers / asserts

Separate unit / integration tests: even in different folders
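
The readability tips above can be sketched in a short test. This is my own illustration (the `Order`/`OrderBuilder` names are invented, not from the talk): the AAA phases are separated by blank lines, and a test data builder keeps the arrange phase short.

```java
// Hypothetical Order + test data builder, illustrating Arrange / Act / Assert.
public class OrderTotalTest {

    static class Order {
        private final int quantity;
        private final int unitPrice;
        Order(int quantity, int unitPrice) { this.quantity = quantity; this.unitPrice = unitPrice; }
        int total() { return quantity * unitPrice; }
    }

    // Builder with sensible defaults: tests override only the fields they care about.
    static class OrderBuilder {
        private int quantity = 1;
        private int unitPrice = 100;
        OrderBuilder withQuantity(int q) { this.quantity = q; return this; }
        OrderBuilder withUnitPrice(int p) { this.unitPrice = p; return this; }
        Order build() { return new Order(quantity, unitPrice); }
    }

    static void totalIsQuantityTimesUnitPrice() {
        // Arrange
        Order order = new OrderBuilder().withQuantity(3).withUnitPrice(50).build();

        // Act
        int total = order.total();

        // Assert
        if (total != 150) throw new AssertionError("expected 150 but was " + total);
    }

    public static void main(String[] args) {
        totalIsQuantityTimesUnitPrice();
        System.out.println("ok"); // prints ok
    }
}
```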

[video] Alf Rehn (Åbo Akademi University) - How To Save Innovation From Itself

[CE] Desarrollando tus capacidades a través de la improvisación - Pablo Rodríguez

Improvise == to be ready for the unplanned
Do not judge; do not judge yourself
There’s no need to be original => create with what you have at hand
Unblocking sentence: “Yes, and also …”
Everybody has something to bring in
[technique] the “you cannot say no” game

Follow the error edge => learn:

  • an error means that you have gone too far
  • no error means that you haven’t tried enough

Team:

  • you shine when others shine => support each other
  • be yourself, because the others are already taken => bring in your own universe => something bigger will be created

Listen:

  • you may realize that someone else has the same problem or knows how to solve it
  • open listening
  • making yourself understood is important too

Live in the present:

  • accept the new paths
  • [technique] the intervened story

Simplify: less is more => a specific action attracts attention

book Improv Wisdom: Don’t Prepare, Just Show Up – Patricia Ryan Madson

book Improv-ing Agile Teams: Using Constraints to Unlock Creativity – Paul Goddard

cas, conference

Building and Deploying Microservices With Event Sourcing - by Chris Richardson @ InfoQ

I’ve just watched this presentation by Chris Richardson about Building and Deploying Microservices with Event Sourcing.

These are notes to myself taken while watching the presentation:

Monolithic apps lock you in the technology you choose at the beginning of the project.

Events used to achieve eventual consistency across distributed services / datastores:

  • Microservices publish events when state changes
  • Microservices subscribe to events
    • maintain eventual consistency (multiple aggregates in multiple datastores)
    • synchronize replicated data
  • An event needs to be published atomically every time a domain entity changes its state

Event sourcing: for each aggregate persist the events that lead to a particular state instead of the state itself
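
A minimal sketch of that idea, using a hypothetical `BankAccount` aggregate (my own illustration, not code from the talk): commands validate against current state and record events, and state is rebuilt by replaying the persisted events.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event-sourced aggregate: we store the events that led to the
// current state instead of the state itself, and rebuild state by replaying them.
public class EventSourcingSketch {

    interface Event {}
    static class Deposited implements Event { final int amount; Deposited(int a) { amount = a; } }
    static class Withdrawn implements Event { final int amount; Withdrawn(int a) { amount = a; } }

    static class BankAccount {
        private int balance = 0;
        private final List<Event> uncommitted = new ArrayList<>();

        // Commands validate against current state and record a new event.
        void deposit(int amount) { record(new Deposited(amount)); }
        void withdraw(int amount) {
            if (amount > balance) throw new IllegalStateException("insufficient funds");
            record(new Withdrawn(amount));
        }

        private void record(Event e) { apply(e); uncommitted.add(e); }

        // Applying an event is the only way state changes - replay uses the same path.
        private void apply(Event e) {
            if (e instanceof Deposited) balance += ((Deposited) e).amount;
            else if (e instanceof Withdrawn) balance -= ((Withdrawn) e).amount;
        }

        static BankAccount replay(List<Event> history) {
            BankAccount account = new BankAccount();
            for (Event e : history) account.apply(e);
            return account;
        }

        int balance() { return balance; }
        List<Event> uncommittedEvents() { return uncommitted; }
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100);
        account.withdraw(30);

        // Persist account.uncommittedEvents() to the event store; later, rebuild:
        BankAccount rebuilt = BankAccount.replay(account.uncommittedEvents());
        System.out.println(rebuilt.balance()); // prints 70
    }
}
```

A snapshot, as mentioned below, would just be a serialized `balance` plus the position in the event list from which to resume replay.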

Persisting events: json is a good choice because of its loose mapping mechanism

Optimize by using snapshots:

  • serialize a memento of the aggregate
  • load latest snapshot + subsequent events

Business benefits of event sourcing:

  • built-in audit log
  • enables temporal queries
  • preserved history

Technical benefits:

  • No more O/R mapping - we are just persisting events

Drawbacks:

  • Handling duplicate events / out-of-order

Think of a microservice as a DDD aggregate

For view requests use CQRS & denormalized views

Use an Event Archiver that subscribes to all events: enables analytics.

cqrs, docker, event, event sourcing, presentation, reading

Going Polyglot the Easy Way - by Wojciech Ogrodowczyk @ SCBCN15

I’ve just watched this presentation by Wojciech Ogrodowczyk about Going Polyglot the Easy Way.

The slides used during the presentation can be found here.

These are notes to myself taken while watching the presentation:

  • It’s better to use the language you are learning in your daily job than to spend time on evenings and weekends.
  • Look for an opportunity to introduce the new language in your job.
  • Use benchmarks to prove that the language fits the job.
  • Make small changes: show results fast / throw-away code
  • Internal tool is a good place to start: issues will be reported / users will be forgiving
  • Short-lived code (i.e. migration code) is another good place to start.
  • Do not mess around with mission critical systems
  • Some places where the new language can be introduced:
    • Publish / Subscribe
    • SOA
  • Learn something outside of your comfort zone: e.g. if you are used to backend, learn a front-end language (Elm, ClojureScript)
  • Elm has “time travel” capabilities: you can record sessions & replay them later: debugging / triaging
  • Fixed vs growth mindset

languages, learning, presentation, reading, scbcn15

Masterclass - Dobles De Test - by Xavi Gost @ Devscola

I’ve just watched this presentation by Xavi Gost about Masterclass - Dobles de Test.

These are notes to myself taken while watching the video:

  • The type of test double to use should be dictated by the intention of use.
  • Test doubles:
    • Dummy: we only need it to be present.
    • Stub: returns fixed values.
    • Mock: more complex scenarios.
  • Mocks are evil. When using a mock you are acknowledging a lack of ability / know-how in the design / architecture.
  • Mocks introduce complexity in a part of the system (tests) that doesn’t deliver value to the customer.
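
The three doubles can be hand-rolled in a few lines. This is my own illustrative sketch (the `Mailer`/`RateProvider`/`InvoiceService` names are invented, not from the masterclass):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled dummy, stub and mock for hypothetical collaborators.
public class TestDoublesSketch {

    interface Mailer { void send(String to, String body); }
    interface RateProvider { double rateFor(String currency); }

    // Dummy: passed only to satisfy a signature; it must never actually be used.
    static class DummyMailer implements Mailer {
        public void send(String to, String body) { throw new AssertionError("should not be called"); }
    }

    // Stub: returns a fixed value so the test can check a computation.
    static class FixedRateProvider implements RateProvider {
        public double rateFor(String currency) { return 2.0; }
    }

    // Mock: records interactions so the test can verify a call happened.
    static class MockMailer implements Mailer {
        final List<String> sentTo = new ArrayList<>();
        public void send(String to, String body) { sentTo.add(to); }
    }

    static class InvoiceService {
        private final RateProvider rates;
        private final Mailer mailer;
        InvoiceService(RateProvider rates, Mailer mailer) { this.rates = rates; this.mailer = mailer; }
        double priceIn(String currency, double amount) { return amount * rates.rateFor(currency); }
        void notifyCustomer(String customer) { mailer.send(customer, "invoice ready"); }
    }

    public static void main(String[] args) {
        // Stub-style test: we assert on the returned value; the mailer is a dummy.
        InvoiceService service = new InvoiceService(new FixedRateProvider(), new DummyMailer());
        if (service.priceIn("EUR", 100) != 200.0) throw new AssertionError();

        // Mock-style test: we assert that the interaction happened.
        MockMailer mailer = new MockMailer();
        new InvoiceService(new FixedRateProvider(), mailer).notifyCustomer("alice@example.com");
        if (!mailer.sentTo.contains("alice@example.com")) throw new AssertionError();
        System.out.println("ok"); // prints ok
    }
}
```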

Here’s a mind map of the references used in this presentation:

presentation, reading, tdd, test doubles, xavi gost

Testing and Refactoring Legacy Code - by Sandro Mancuso

I’ve just watched this presentation by Sandro Mancuso about Testing and Refactoring Legacy Code.

In this video the trip-service-kata is used.

These are notes to myself taken while watching the video:

  • Wrap code with tests before modifying it. If the code is not covered by tests perform IDE auto refactorings only.
  • Start testing from the shallowest nesting level to the deepest.
  • Start refactoring from the deepest nesting level to the shallowest.
  • In the case of static methods, singletons or instances being created inside a method: create seams to make the code testable. Seams can be overridden in order to obtain an intermediate testable class.
  • When working with legacy code use code coverage in order to verify that your tests cover the code they are created for.
  • Use builders to make tests more readable.
  • Refactor staying on the green side of red-green-refactor cycle as much as you can.
  • Try to convert conditionals to guards.
  • Initially get rid of variables whenever possible without considering potential performance issues caused by calling methods multiple times. Variables can be reintroduced later on if needed.
  • Your code should reflect the language in your tests if possible.
  • The language in your tests and code should be the domain language.
  • If the design is wrong - when adding tests we are perpetuating the wrong design!
  • Wrap static methods in instance methods. In your code base start replacing static calls with instance calls, until the static call is not used anymore: then you can get rid of the static method.
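
The subclass-and-override seam technique can be sketched like this (my own example with a hypothetical `GreetingService`, not code from the video):

```java
// Hypothetical legacy class with a static call, made testable via a seam.
public class SeamSketch {

    static class UserSession {
        // Hard to control from a test: depends on the environment.
        static String loggedUser() { return System.getProperty("user.name"); }
    }

    static class GreetingService {
        String greet() { return "Hello, " + loggedUser(); }

        // Seam: the static call is wrapped in a protected instance method...
        protected String loggedUser() { return UserSession.loggedUser(); }
    }

    // ...so a test can subclass and override it with a known value.
    static class TestableGreetingService extends GreetingService {
        @Override protected String loggedUser() { return "alice"; }
    }

    public static void main(String[] args) {
        String greeting = new TestableGreetingService().greet();
        if (!greeting.equals("Hello, alice")) throw new AssertionError(greeting);
        System.out.println(greeting); // prints Hello, alice
    }
}
```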

kata, legacy, presentation, reading, refactoring

What to Look for in a Code Review - by Trisha Gee @ JetBrains Upsource Blog

I’ve just read this post about What to look for in a Code Review.

These are notes to myself.

Automate these:

  • Formatting
  • Style
  • Naming
  • Test coverage

Instead look for:

Design

  • Does it fit with the overall architecture?
  • SOLID principles, Domain Driven Design
  • Design patterns used. Are these appropriate?
  • Does this new code follow the current practices? Is the code migrating in the correct direction?
  • Is the code in the right place?
  • Could the new code have reused something in the existing code? Does the new code provide something we can reuse in the existing code? Does the new code introduce duplication?
  • Is the code over-engineered? YAGNI?

Readability & Maintainability

  • Do the names actually reflect the thing they represent?
  • Can I understand what the code does by reading it?
  • Can I understand what the tests do?
  • Do the tests cover a good subset of cases? Do they cover happy paths and exceptional cases? Are there cases that haven’t been considered?
  • Are the exception error messages understandable?
  • Are confusing sections of code either documented, commented, or covered by understandable tests (according to team preference)?

Functionality

  • Does the code actually do what it was supposed to do? Do the tests really test the code meets the agreed requirements?
  • Does the code look like it contains subtle bugs, like using the wrong variable for a check, or accidentally using an and instead of an or?

Have you thought about…?

  • Security
  • Regulatory requirements that need to be met?
  • Does the new code introduce avoidable performance issues?
  • Does the author need to create public documentation, or change existing documentation?
  • Have user-facing messages been checked for correctness?
  • Are there obvious errors that will stop this working in production?

code-review, reading

Optimizing the Sustainable Pace - by Paul Pagel @ 8th Light

I’ve just read this post about Optimizing the Sustainable Pace.

These are notes to myself.

By doing the simplest thing that could possibly work, we’re exposing ourselves to the incidental complexity

The idea of sustainability assumes that resources are finite but cyclical. When you exhaust some amount of a resource, it can replenish itself only after a similar investment in its restoration.

The article refers to Out of the Tar Pit - by Ben Moseley & Peter Marks

agile, reading, xp

A Tech Lead Paradox: Technical Needs vs Business Needs - by Pat Kua @ thekua.com@work

I’ve just read this post A tech lead paradox: technical needs vs business needs about the conflict between business and technical needs:

A business will always put pressure on a development team to produce as much software as possible. At the same time, effective delivery of software is not possible without addressing some level of technical needs – such as technical debt, deployment pipelines, or automated test suites.

The article proposes the following practices to deal with the conflict:

  • champion time for technical needs
  • explain the business benefit of each technical need in order to build trust with non-technical people
  • work on high impact items first
  • keep a balance
  • maximize the use of ‘quiet’ periods

The article refers to Embracing Paradox - by Jim Highsmith

prioritization, reading

Slides for My ‘Dependency Injection Smells’ Talk - by Matthias Noback @ PHP & Symfony

I’ve just read these slides about dependency injection smells: Slides for my ‘Dependency Injection Smells’ talk

These are notes to myself.

Dependency injection smells:

  • static dependency
  • missing dependency auto-recovery
  • hidden dependencies
  • creation logic reduction
  • factory methods
  • programming against an implementation
  • dependencies prohibited
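
As an illustration of the first smells in the list, here is a hypothetical `Timestamper` (my own example, not from the slides): the smelly version reaches for its clock internally, while the fixed version declares the dependency in its constructor so callers can see and replace it.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Hidden/static dependency vs. explicit constructor injection.
public class DependencySmellSketch {

    // Smell: the time source is hidden inside the method and cannot be replaced in a test.
    static class TimestamperWithHiddenDependency {
        String stamp(String message) { return Instant.now() + " " + message; }
    }

    // Fix: the Clock is injected, so a test can pass a fixed clock.
    static class Timestamper {
        private final Clock clock;
        Timestamper(Clock clock) { this.clock = clock; }
        String stamp(String message) { return Instant.now(clock) + " " + message; }
    }

    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.parse("2015-12-03T10:00:00Z"), ZoneOffset.UTC);
        String stamped = new Timestamper(fixed).stamp("deploy");
        if (!stamped.startsWith("2015-12-03T10:00") || !stamped.endsWith(" deploy"))
            throw new AssertionError(stamped);
        System.out.println("ok"); // prints ok
    }
}
```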

Keep in mind:

  • be clear and open about your dependencies
  • require only a minimum amount of dependencies
  • develop with your users (other developers) in mind

dependency-injection, php, presentation, reading, symfony

What Is Strategic Product Management? – Solving Day to Day Problems With the Long Term in Mind - by Vasco Duarte @ Software Development Today

I’ve just read this post about strategic product management: What is Strategic Product Management? – solving day to day problems with the long term in mind

These are notes to myself:

Sometimes it is easy to lose sight of the long-term direction:

One team that I worked with at some point coined the phrase “being a slave to the backlog” to describe the feeling of powerlessness, and being imprisoned in the relentless rhythm that took them from story to story through overtime and much stress without a clear vision or direction.

We must always:

  • begin with the end in mind
  • define the product vision
  • regularly review these based on the feedback collected throughout development

Strategic product management focuses on:

  • strategy
  • portfolio
  • roadmap

product-management, reading

Mocks vs. Stubs - It Has Nothing to Do With the Implementation - by Jason Gorman @ Software People Inspiring

I’ve just read this post about the difference between mocks and stubs: Mocks vs. Stubs - It Has Nothing To Do With The Implementation

According to the post:

If we want our test to fail because external data supplied by a collaborator was used incorrectly, it’s a stub.

If we want our test to fail because a method was not invoked (correctly) on a collaborator, it’s a mock.

mock, reading, stub, tdd

Tests Should Test One Thing (?) - by Jason Gorman @ Software People Inspiring

I’ve just read this post about TDD best practices: Tests Should Test One Thing (?)

The author states that

unit tests should have only one reason to fail

and there are several good reasons for this:

  • Small tests tend to be easier to pass, facilitating a “baby steps” approach to development
  • Small tests tend to be easier to understand, serving as clearer specifications
  • Small tests, when failing, make it easier to pinpoint the problem

best-practices, reading, tdd

Throw Defect - by Nat Pryce @ Mistaeks I Hav Made

I’ve just read this post about an interesting way to make explicit programmer errors: Throw Defect

This can be particularly useful to catch errors that are not expected to happen. Here’s an example from the original post:

Template template;
try {
    template = new Template(getClass().getResource("data-that-is-compiled-into-the-app.xml"));
}
catch (IOException e) {
    // should never happen
}

The ‘should never happen’ comment block can be replaced - thus making the error explicit - in the following way:

Template template;
try {
    template = new Template(getClass().getResource("data-that-is-compiled-into-the-app.xml"));
}
catch (IOException e) {
    throw new Defect("could not load template", e);
}
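
The snippets assume a `Defect` class that the post does not show; a minimal version (my own assumption) might simply be an unchecked exception, so that programmer errors propagate without forcing callers to declare them:

```java
// Minimal assumed Defect class: an unchecked exception signalling a programmer
// error ("should never happen") rather than an expected runtime condition.
public class Defect extends RuntimeException {
    public Defect(String message) {
        super(message);
    }

    public Defect(String message, Throwable cause) {
        super(message, cause);
    }
}
```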

exceptions, java, reading

Misadventures With Property-Based TDD: A Lesson Learned - by Nat Pryce @ Mistaeks I Hav Made

I’ve just read this post about TDD best practices & lessons learned: Mistaeks I Hav Made: Misadventures with Property-Based TDD: A Lesson Learned

Through a TDD example it reaches the following conclusion:

When working from examples, we start with specifics and then generalise, by adding contradictory examples. With property-based tests it seems better to start with very general properties and then specialise.

Interestingly, the tests used in the example are based on factcheck:

A simple but extensible implementation of QuickCheck for Python 2.7 and Python 3 that works well with Pytest.
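
The “start from very general properties” idea needs no framework at all. Here is a hand-rolled sketch in Java (my own illustration; the post itself uses Python’s factcheck): a general property is checked against many random inputs.

```java
import java.util.Arrays;
import java.util.Random;

// Hand-rolled property-based check: state a general property, then hammer it
// with random inputs instead of a handful of hand-picked examples.
public class PropertySketch {

    // Property: sorting is idempotent - sorting twice equals sorting once.
    static boolean sortIsIdempotent(int[] input) {
        int[] once = input.clone();
        Arrays.sort(once);
        int[] twice = once.clone();
        Arrays.sort(twice);
        return Arrays.equals(once, twice);
    }

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed, so failures are reproducible
        for (int run = 0; run < 100; run++) {
            int[] input = random.ints(random.nextInt(20), -1000, 1000).toArray();
            if (!sortIsIdempotent(input))
                throw new AssertionError("counterexample: " + Arrays.toString(input));
        }
        System.out.println("100 runs passed"); // prints 100 runs passed
    }
}
```

A real framework adds generators and shrinking of counterexamples, but the shape of the test is the same.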

best-practices, reading, tdd

What Not to Do in a TDD Pair Programming Interview - by Jason Gorman @ Software People Inspiring

I’ve just read this post about TDD best practices: What Not To Do In a TDD Pair Programming Interview - Software People Inspiring

Although initially focused on pair programming interviews, the recommendations also apply to your day-to-day TDD flow.

Here’s a summary of DON’Ts:

  • start by writing implementation code
  • introduce speculative generality (create code we don’t need to pass the tests)
  • write weak or meaningless tests
  • write redundant tests
  • not running the tests
  • not refactoring when it’s obviously needed
  • hack away at the code when ‘refactoring’
  • write one test that asks all the questions

reading, tdd

Are They the Same? Kata Java Solution

Today I was practicing with the Are they the same? kata from www.codewars.com

Here’s a summary of the steps I followed to solve it.

Description

The kata goal is

Given two arrays a and b write a function comp(a, b) (compSame(a, b) in Clojure) that checks whether the two arrays have the “same” elements, with the same multiplicities. “Same” means, here, that the elements in b are the elements in a squared, regardless of the order.

The full description can be found at www.codewars.com
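
One possible approach (my own sketch, not necessarily the solution from the post): square every element of `a`, sort both arrays, and compare them element by element.

```java
import java.util.Arrays;

// "Are they the same?" kata: b must contain exactly the squares of a's elements,
// with the same multiplicities, in any order.
public class AreTheySame {

    public static boolean comp(int[] a, int[] b) {
        if (a == null || b == null || a.length != b.length) return false;
        // Assumes values are small enough that x * x does not overflow int.
        int[] squared = Arrays.stream(a).map(x -> x * x).sorted().toArray();
        int[] sortedB = Arrays.stream(b).sorted().toArray();
        return Arrays.equals(squared, sortedB);
    }

    public static void main(String[] args) {
        int[] a = {121, 144, 19, 161, 19, 144, 19, 11};
        int[] b = {121 * 121, 144 * 144, 19 * 19, 161 * 161, 19 * 19, 144 * 144, 19 * 19, 11 * 11};
        System.out.println(comp(a, b)); // prints true
        System.out.println(comp(new int[]{2, 4}, new int[]{4, 4})); // prints false
    }
}
```

Sorting makes the comparison order-independent and handles multiplicities for free, at O(n log n).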

java, kata