It’s not a fully controlled environment, that is the point of smokes.
Polling is certainly useful, but past a point the unreliability it introduces degrades its effectiveness. I certainly want to know if the app is unreachable over the open internet, and I absolutely need to know if a partner’s API is down.
Wherever possible, this is a good idea. The campsite rule - tests don’t touch data they didn’t bring with them - helps as well.
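The campsite rule can be made concrete with a small sketch. Everything here (the `orders` store, the test name) is invented for illustration: the test creates the record it needs, exercises it, and deletes it, leaving shared state exactly as it found it.

```python
# Pre-existing data in the shared test environment that we must not touch.
orders = {"existing-1": {"status": "shipped"}}

def test_cancel_order():
    # Bring your own data: create the record this test depends on.
    orders["smoke-test-1"] = {"status": "open"}
    try:
        orders["smoke-test-1"]["status"] = "cancelled"  # exercise the behaviour
        assert orders["smoke-test-1"]["status"] == "cancelled"
    finally:
        # Leave the campsite clean: remove only what this test created.
        del orders["smoke-test-1"]

test_cancel_order()
# Shared state is unchanged, so tests can run in any order.
assert orders == {"existing-1": {"status": "shipped"}}
```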
However, many end to end tests exist as a pipeline, especially for entities that are core to the business function of the app. Cramming all of that sequentiality into a single test gives you all the problems described, just concentrated in one giant test that you then have to fish the actual failure out of.
My experience with E2E testing is that the tools and methods necessary to test a complex app are flaky. Waits, checks for text or selectors and custom form field navigation all need careful balancing to make the test effective. On top of this, there is frequently a sequentiality to E2E tests that causes these failures to multiply in frequency, as you’re at the mercy of not just the worst test, but the product of the failure rates of every test in the sequence.
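That multiplication is worth doing once with real numbers. The 0.98 pass rate here is an assumed figure for illustration:

```python
# If each step in a sequential E2E pipeline passes 98% of the time,
# the whole pipeline only passes the product of those rates.
per_step_pass_rate = 0.98
steps = 10
pipeline_pass_rate = per_step_pass_rate ** steps
print(round(pipeline_pass_rate, 3))  # ~0.817: roughly 1 run in 5 fails
```

So ten individually "pretty reliable" steps already give you a pipeline that fails about 18% of the time for no real reason.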
I agree that the tests cause less flakiness in the app itself, but I have found smokes inherently flaky in a way that unit and integration tests are not.
My team has just decided to make working smokes a mandatory part of merging a PR. If the smokes don’t work on your branch, it doesn’t merge to main. I’m somewhat conflicted - on one hand, we had frequent breaks in the smokes that developers didn’t fix, including ones that represented real production issues. On the other, smokes can fail for no reason and are time consuming to run.
We use Playwright, running on GitHub Actions. The default free-tier runner has been awful, and we’re moving to larger runners on the platform. We have a retry policy on any smokes that need to run in a step-by-step order, and we aggressively prune and remove smokes that frequently fail or don’t test for real issues.
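Playwright has retries built in (the `retries` option in its config), but the idea behind a retry policy is language-agnostic. This is a hypothetical sketch, not Playwright’s implementation; `with_retries` and `flaky_step` are invented names:

```python
import time

def with_retries(fn, attempts=3, delay_seconds=0):
    # Re-run a flaky step up to `attempts` times; re-raise the last
    # failure only if every attempt fails.
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except AssertionError as exc:
            last_error = exc
            time.sleep(delay_seconds)
    raise last_error

# Simulated flaky step: fails twice, then passes.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("transient failure")
    return "passed"

assert with_retries(flaky_step) == "passed"
```

The trade-off is the usual one: retries paper over genuine intermittent bugs as readily as they paper over runner noise, which is part of why we prune aggressively.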
Get good at the three-point turn.
This is a stable way to make changes on any system that has a dependency on another platform, repository, or system. It’s good practice for anything on the web, as users may have logged in or long running sessions, and it works for systems that call each other and get released on different cadences.
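If the "three point turn" here is the expand/contract (parallel change) pattern, a hypothetical sketch of renaming a field across systems that deploy on different cadences might look like this. All names are invented for illustration:

```python
# Step 1 (expand): writers emit both the old and new field, so old
# readers keep working.
def write_user(name):
    return {"name": name, "full_name": name}

# Step 2 (migrate): readers prefer the new field but fall back to the
# old one, so they handle records written before or after step 1.
def read_user(record):
    return record.get("full_name", record.get("name"))

# Step 3 (contract): once every reader and writer is upgraded, writers
# drop "name" and readers drop the fallback.

assert read_user({"name": "ada"}) == "ada"          # old record
assert read_user(write_user("grace")) == "grace"    # new record
```

Each step is independently deployable, which is exactly what you need when the other side of the dependency releases on its own schedule.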
I know your point is that people should use real judgement, but that’s a great line to draw for people who need it.
Is naming consistency important enough to break compatibility? No, absolutely not.
We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
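Libraries like Hypothesis do this properly, but the idea fits in a stdlib-only sketch: generate fuzzed inputs and check invariants that should hold for all of them. Sorting is the stand-in system under test here:

```python
import random

def test_sort_invariants(trials=200):
    # Property test sketch: for randomly fuzzed inputs, sorting should
    # (a) produce an ordered result, (b) be idempotent, and
    # (c) preserve the multiset of elements.
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(trials):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        result = sorted(data)
        assert all(a <= b for a, b in zip(result, result[1:]))  # ordered
        assert sorted(result) == result                         # idempotent
        assert sorted(data) == result                           # same elements

test_sort_invariants()
```

The invariants stay fixed while the data varies, which is what makes this the inverse of mutation testing: there the tests stay fixed while the code varies.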
I think the best thing to do with TDD is pair with or convince devs to try it for a feature. Coming at things test first can be novel and interesting, and it does train you to test and use tests better. Once people have tried it, I think it broadens your use of tests pretty well.
However, TDD can be a bit of a cult, and most smart and independent people (like people willing to work at a <20 person company) will notice that TDD isn’t the silver bullet its proponents make it out to be.
YAML diffs better in git than JSON, but so much config work is copy-and-paste, and YAML is horrible at that.
A format where changing one line doesn’t turn into changing three lines, but that you could still copy off a website, would be great.
JSON5 is a superset of JSON that supports comments.
XML to transform XML to import into more XML? Can’t we just have a config file that isn’t setting up some big tie in?
I don’t know if it’s actual JSON5, but ESLint and some other libraries use extended, commentable JSON in their config files.
XML would be great if it wasn’t for the extended XML universe of namespaces and imports.
“It takes years” seems like the most reasonable alternative to forcing my coworkers to TDD or not merge code.
I think that getting people into the benefits of testing is really worthwhile. Building out some of these test suites - especially the end to end tests - was a really eye-opening experience that gave me a lot of insight into the product. “Submit a test with your bugfix” is another good practice - a test that reproduces the error the first time keeps the regression from creeping back in.
I’ve had some luck using AI to get over the hump of the first “does this component work” test - it’s easy to have GPT spot what needs to be mocked and put in stub mocks. GPT is horrible at writing good tests, but often that first test is harder to write than the meaningful ones.
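That "first test" is usually just wiring: stub the dependency, call the component, check the pieces connect. A sketch with stdlib `unittest.mock`; the component (`fetch_profile`, the `client`) is invented for illustration:

```python
from unittest.mock import Mock

# Hypothetical component under test: fetches a user and reshapes it.
def fetch_profile(client, user_id):
    response = client.get(f"/users/{user_id}")
    return {"id": user_id, "name": response["name"]}

def test_fetch_profile_smoke():
    # The "does this component work at all" test: stub the dependency
    # and check the component wires its inputs and outputs together.
    client = Mock()
    client.get.return_value = {"name": "Ada"}
    profile = fetch_profile(client, 7)
    assert profile == {"id": 7, "name": "Ada"}
    client.get.assert_called_once_with("/users/7")

test_fetch_profile_smoke()
```

Once this scaffold exists, the meaningful tests (error paths, edge cases) are mostly copy, tweak, and rename.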
His manager at least had the decency to warn him ahead of time about the PIP. Still - it seems mostly about forcing him out of his remote position.
GitHub Actions is good for us, but honestly just because that’s where our code is.
dependencies.