Add APPROVAL TESTING To Your Bag Of Tricks

15,515 views

Continuous Delivery

1 year ago

Approval Testing is a great tool to add to your bag of tricks as a software developer: it is the easiest way to protect code that you want to change, so that you can make those changes with confidence. Here are some real approval test demos showing Approval Tests in action, and an explanation of where and when to use them.
In this brief Approval Testing tutorial, Dave Farley, author of Continuous Delivery and Modern Software Engineering, explores the fundamentals of how Approval Tests work, when and how to use them effectively, and when not to use them.
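To make the core mechanism concrete before watching: an approval test captures the current output of the code and compares future runs against that approved copy. The sketch below is a minimal, hand-rolled illustration of that loop in Python; it is not the API of the ApprovalTests library linked below, and the verify helper and format_invoice function are hypothetical stand-ins.

```python
from pathlib import Path

def verify(actual: str, name: str) -> None:
    """Minimal approval check: compare output with a previously approved file."""
    received = Path(f"{name}.received.txt")
    approved = Path(f"{name}.approved.txt")
    received.write_text(actual)
    if not approved.exists() or approved.read_text() != actual:
        raise AssertionError(
            f"Output differs from {approved}. Inspect {received}; if the new "
            f"behaviour is correct, rename it to {approved} to approve it."
        )
    received.unlink()  # output matched: no need to keep the received file

def format_invoice(items) -> str:
    # Hypothetical legacy code we want to protect before refactoring.
    return "\n".join(f"{name}: {price:.2f}" for name, price in items)

verify(format_invoice([("tea", 1.50), ("cake", 3.00)]), "invoice")
```

On the first run there is no approved file, so the check fails and leaves a received file for a human to inspect and approve; from then on, any behavioural change in the code under test trips the comparison.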
-
⭐ PATREON:
Join the Continuous Delivery community and access extra perks & content!
JOIN HERE ➡️ bit.ly/ContinuousDeliveryPatreon
-
🎓 Learn techniques like Approval Testing, Refactoring and Decluttering and watch me apply them to some very nasty code. You can work along with me FOR FREE to learn how to make the bad code testable. Find out more here ➡️ courses.cd.training/courses/refactoring-tutorial
-
👕 T-SHIRTS:
A fan of the T-shirts I wear in my videos? Grab your own, at reduced prices EXCLUSIVE TO CONTINUOUS DELIVERY FOLLOWERS! Get money off the already reasonably priced t-shirts!
🔗 Check out their collection HERE: bit.ly/3vTkWy3
🚨 DON'T FORGET TO USE THIS DISCOUNT CODE: ContinuousDelivery
-
🖇 LINKS:
🔗 "Approval testing explained": ➡️ www.softwaretestingmagazine.c...
🔗 "Approval testing library": ➡️ approvaltests.com/
🔗 "Approval testing - what it is and how it helps": ➡️ techtalk.at/blog/approval-tes...
🔗 "What is Approval Testing": ➡️ www.linkedin.com/pulse/what-a...
🔗 “Painless Visual Testing”, Gojko Adzic: ➡️ • Painless visual testin...
🔗 "Graphical Approval Testing": ➡️ github.com/AppraiseQA/apprais...
🔗 “Approval Testing Training Course”, Emily Bache: ➡️ training.techtalk.at/training...
-
BOOKS:
📖 Dave’s NEW BOOK "Modern Software Engineering" is available as paperback, or kindle here ➡️ amzn.to/3DwdwT3
and NOW as an AUDIOBOOK available on iTunes, Amazon and Audible.
📖 The original, award-winning "Continuous Delivery" book by Dave Farley and Jez Humble ➡️ amzn.to/2WxRYmx
📖 "Continuous Delivery Pipelines" by Dave Farley
Paperback ➡️ amzn.to/3gIULlA
ebook version ➡️ leanpub.com/cd-pipelines
NOTE: If you click on one of the Amazon Affiliate links and buy the book, Continuous Delivery Ltd. will get a small fee for the recommendation with NO increase in cost to you.
-
CHANNEL SPONSORS:
Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ bit.ly/3ASy8n0
Sleuth is the #1 most accurate and actionable DORA metrics tracker for improving engineering efficiency. Sleuth models your entire development cycle by integrating with the tools you already invest in. You get a full and accurate view of your deployments, see where true bottlenecks lie, and keep your team’s unique processes and workflows. With accurate data, Sleuth surfaces insights that your engineers can act on to improve - with real impact. ➡️ www.sleuth.io/
Roost, An Ephemeral DevOps Platform, automates your DevOps pipeline. It creates ephemeral DevOps environments on-demand or based on pull requests. Roost reduces DevOps complexities and shortens release cycles with fewer engineers. ➡️ bit.ly/CD2Roost
IcePanel is a collaborative diagramming tool to align software engineering and product teams on technical decisions across the business. Create an interactive map of your software systems and give your teams full context about how things work now and in the future. ➡️ u.icepanel.io/1f7b2db3
Tricentis is an AI-powered platform helping you to deliver digital innovation faster and with less risk by providing a fundamentally better approach to test automation. Discover the power of continuous testing with Tricentis. ➡️ bit.ly/TricentisCD
TransFICC provides low-latency connectivity, automated trading workflows and e-trading systems for Fixed Income and Derivatives. TransFICC resolves the issue of market fragmentation by providing banks and asset managers with a unified low-latency, robust and scalable API, which provides connectivity to multiple trading venues while supporting numerous complex workflows across asset classes such as Rates and Credit Bonds, Repos, Mortgage-Backed Securities and Interest Rate Swaps ➡️ transficc.com

Comments: 57
@ContinuousDelivery • 1 year ago
I have a FREE TUTORIAL on Refactoring and the use of Approval Tests that you can access here ➡️ courses.cd.training/courses/refactoring-tutorial
@jimhumelsine9187 • 1 year ago
I mostly work in legacy code that I didn't write, so I write a lot of characterization tests. My characterization tests look a bit like unit tests, but they are different. Unit tests define/specify the behavior that's desired before implementing it. Then we know the implementation is complete when all of the definition/specification behavior-based tests pass.

Characterization tests have a different order. The legacy code may be months or even years old. Git Blame reveals that the developer is a ghost in the system, since you may not even recognize the name. We can't write unit tests based upon behavior. We probably don't even know what the behavior is. We just know that it works.

For my characterization tests, I start with the traditional GIVEN and WHEN portions of the test, and I run them. Then I add as many asserts and verifications in the THEN portion as I can. Asserts are usually based upon what's actually returned from the implementation and any state changes to the class being tested. Verifications are usually based upon interactions with other components. This is all highly implementation-based, because it may be the only information that I have.

Characterization tests help us explore the legacy code and reveal its behavior and secrets. They give us a way to codify this discovery process. Characterization tests are brittle, as has been mentioned elsewhere in the comments, because they tend to be implementation-specific. However, as Dave pointed out, they provide a safety net to refactor and make the intent more obvious. Once intent becomes more obvious via refactoring, we may want to replace the characterization tests with tests that are more akin to behavior-based tests. This will especially be true if there's a redesign.

As for bugs, I've found a few via characterization tests. I also leave them in. However, I change my test name slightly. When I encounter behavior that seems off, I'll document it via a test with a name along the lines of: methodX_ReturnsY_WhenZ_SHOULD_IT(). My test name, which is usually a statement, becomes a question. I will create a ticket that focuses upon the test, hopefully for someone who knows the domain better than I do. The questionable behavior cannot be denied. The SHOULD_IT test documents it. If it is the desired behavior, then we only need to remove "SHOULD_IT" and we have a regular test. If the behavior is incorrect, then we create new tests that document the desired behavior. They will fail. Then the code should be updated to make the new tests pass, and the SHOULD_IT tests should now fail. They can be removed once the behavior has been updated.

Not only are characterization tests brittle, but they tend to be ugly and complicated. This is not an issue with the test. It's a reflection of legacy code that's ugly and complicated. When unit tests are nasty, that's an indication that you should probably consider some refactoring or redesign effort in the implementation.
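Jim's GIVEN/WHEN/THEN flow and SHOULD_IT naming convention are easy to picture in code. Here is a minimal, hypothetical sketch of the idea; the apply_discount function is invented for illustration and is not from the video.

```python
import unittest

def apply_discount(price: float, code: str) -> float:
    # Hypothetical legacy code whose behaviour we are characterising.
    if code == "VIP":
        return price * 0.8
    if code == "":
        return price * 0.8  # same as VIP: deliberate, or a bug?
    return price

class ApplyDiscountCharacterizationTests(unittest.TestCase):
    def test_applyDiscount_Returns80Percent_WhenCodeIsVIP(self):
        # GIVEN/WHEN: exercise the code. THEN: assert what it actually does today.
        self.assertEqual(80.0, apply_discount(100.0, "VIP"))

    def test_applyDiscount_Returns80Percent_WhenCodeIsEmpty_SHOULD_IT(self):
        # Suspicious: an empty code gets the VIP discount. The SHOULD_IT suffix
        # turns the statement into a question, tracked by a ticket for a domain
        # expert; drop the suffix if the behaviour turns out to be intended.
        self.assertEqual(80.0, apply_discount(100.0, ""))

if __name__ == "__main__":
    unittest.main()
```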
@FredrickIrubor • 1 year ago
apt insight ✔
@jangohemmes352 • 1 year ago
Excited to see actual code with live commentary pop up in these vids! It's a welcome addition to the otherwise high-level commentary, which has me struggling sometimes to figure out how to apply the concepts.
@ContinuousDelivery • 1 year ago
Check some of the older videos; there are lots with code.
@Immudzen • 1 year ago
Approval testing has been extremely useful for me. We took code that had no testing and added approval tests to the system. Once we did that, we refactored piece by piece and added unit tests as we broke things up. As we went, we found real bugs and would then discuss fixing them and what impact that would have on usage of the system. We could then fix the bug, update the approval tests, and repeat the process. It worked well, and I don't know of any other method that would have worked as well to deal with that spaghetti mess of code.
@MrCalyho • 1 year ago
We normally call this snapshot testing.
@ContinuousDelivery • 1 year ago
I generally prefer Michael Feathers' name "Characterisation Tests", but "Approval Tests" as a term seems to be winning at the moment, at least inside my filter-bubble.
@MrCalyho • 1 year ago
@@ContinuousDelivery I always thought the term Approval Test got popularised by the Gilded Rose kata. The term 'snapshot test' is also applied a bit inconsistently, with different tech stacks calling it different things: for the Android and iOS guys it means a screenshot, for the React and web guys it means text generated from a render, and the API guys will use it for a 'snapshot' of the API output. Or at least that's how they use it in my area :D. This difference in opinions always leads to 'big discussions'. I like the terms Approval Test (and Characterisation Test) far better.
@zeropaper • 1 year ago
Sounds to me like regression testing too.
@ProgramArtist • 1 year ago
We call them 'signature tests' for some reason. I guess it's because of the similarity to contract tests.
@haskellelephant • 11 months ago
The Python ecosystem also calls these snapshot tests; the file the result is stored in is the snapshot. I too dismissed the concept as less useful when I first heard about it several years ago because, at the time, it was pitched for use in a new system where requirements were very fluid. However, in legacy systems, where behavior will see little (intended) change and there isn't enough test coverage, it is a great tool. My personal opinion is that if the modernization has a large scope and cannot be achieved by refactoring alone (say, changing from a homemade database to an off-the-shelf product), it makes sense to use the Strangler Fig pattern instead, where the new code can be tested alongside the old code rather than asserting that they produce the same result through a snapshot.
@grrr_lef • 1 year ago
One situation where this was useful for me outside of legacy systems: we were writing a tool that would analyse time-series data for solar power plants. The output was a collection of time series representing certain estimations. We were also improving/developing the estimation while implementing it, so there was no "correct" output to use as expectations in our tests. Still, there was of course a need for refactoring during development, so we would simply run the tool, turn the results into fixtures, and use these to ensure that our refactoring did not change the behaviour.
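A fixture-based check like that has one wrinkle a plain text diff doesn't handle: numeric outputs rarely survive a refactoring bit-for-bit. The following hypothetical sketch freezes estimator output with a float tolerance; verify_series and the data are invented for illustration.

```python
import json
from pathlib import Path

TOLERANCE = 1e-9  # floating-point results rarely survive refactoring bit-for-bit

def verify_series(result: dict, fixture: str) -> None:
    """Compare named time series against a recorded fixture, within tolerance."""
    path = Path(fixture)
    if not path.exists():
        path.write_text(json.dumps(result, indent=2, sort_keys=True))
        raise AssertionError(f"Recorded new fixture {path}; review it, then re-run")
    approved = json.loads(path.read_text())
    assert approved.keys() == result.keys(), "set of series changed"
    for name, series in approved.items():
        assert len(series) == len(result[name]), f"length of {name!r} changed"
        assert all(abs(a - b) <= TOLERANCE
                   for a, b in zip(series, result[name])), f"{name!r} drifted"

# Freeze the estimator's current output before refactoring it:
estimates = {"plant_a": [0.0, 1.2, 3.4], "plant_b": [0.1, 0.9, 2.8]}
verify_series(estimates, "estimates.approved.json")
```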
@ZeDlinG67 • 1 year ago
I really like this hands-on approach with the "nice" examples :D
@emilybache • 1 year ago
Good intro to Approval testing. Thanks for the mention, Dave.
@jimmyhirr5773 • 1 year ago
J. B. Rainsberger has written extensively about this practice, but he calls it Golden Master testing.
@barneylaurance1865 • 1 year ago
It could be very useful to use mutation testing with this sort of approval test before deciding when you're ready to start refactoring.
@brownhorsesoftware3605 • 1 year ago
I've always heard this called regression testing. We had an interesting case at Realia COBOL, because the compiler itself was written in COBOL, so you couldn't simply compare outputs; you had to compare the output of the output. I myself worked on the debugger, and the compiler front-end person was my best user. When she had a problem, I'd be debugging the debugger debugging the compiler compiling the compiler.
@qj0n • 1 year ago
Regression testing is something different; it's more like running the old acceptance tests against changed code. Here's a sentence from Wikipedia about characterization tests: "Unlike regression tests, to which they are very similar, characterization tests do not verify the correct behavior of the code, which can be impossible to determine. Instead they verify the behavior that was observed when they were written." It's also worth mentioning that "characterization tests" are not exactly "approval tests". Approval tests are any tests where a human approves the results, usually only once, and is later asked to approve any changes. Characterization tests are tests where the old code is treated as an oracle, so we assume it's correct. They usually use the same frameworks, however, so they're often mixed up.
@brownhorsesoftware3605 • 1 year ago
​@@qj0n Regression tests test only for regressions, not correctness. Compliance or conformance tests test for correctness.
@qj0n • 1 year ago
@@brownhorsesoftware3605 So Wikipedia is wrong here? That's a claim that would require some sources to back it up.
@brownhorsesoftware3605 • 1 year ago
@@qj0n I am only speaking from decades of experience working on compiler, JVM, and platform internals. Perhaps the Wikipedia author had a different experience. Please note that the first words of my original reply were "I've always heard".
@qj0n • 1 year ago
@@brownhorsesoftware3605 Sure. We need to remember that most terms are coined in academic papers, where they are clearly defined, but when science becomes engineering, people tend to use the words more loosely. If we want to define a naming system and communicate between different environments, though, we probably need to take a step back and look at the source definitions. Which, BTW, is also something missed in this video: from what I can find, approval tests and characterization tests are not the same.
@user-zd6hb7jn5s • 1 year ago
Dave: Randomly roasts some poor soul's open source code. Me: Pauses video to make sure it isn't mine.
@ContinuousDelivery • 1 year ago
I downloaded this code from a web site called "shitcode" some years ago; I have always considered it "pre-roasted" 😉
@mknights5618 • 1 year ago
As with some other comments, I have referred to these as regression tests or snapshot tests. Regardless of the name, this has been very useful in ensuring that UI requirements are maintained, particularly with images and CSS rules operating as client-side logic states. Wherever a difference can be observed, you want to capture it as a property of the response for this form of testing. One helpful trick is using an integration testing tool to capture an image of an element or page via the browser JS API and then compare the images over commits. This can also help with dynamically composed image sprites.
@toopkarcher • 1 year ago
Never heard of this before. I'm not a great dev yet, but I've just been trying to cover everything through unit tests before trying a refactor, which means I refactor in really small steps (good!). But if the legacy code is too complex for me to think of a good, safe way to get it tested enough to break into small steps, it ain't gonna get refactored 😂
@ProgramArtist • 1 year ago
Approval tests are very close to contract testing, and in some sense contract tests are approval tests done twice. I agree that approval tests are useful where things should not change. I also think they can be useful in places where things do change, but seldom, and where changes need to be made more carefully.
@markky212 • 1 year ago
Should we delete those tests after refactoring? IMO, yes! I assume that during the refactoring we used TDD.
@ContinuousDelivery • 1 year ago
Generally yes, I think that these are limited-time, tactical tests. The example I gave of testing visual content is a different use case, though, so I'd certainly keep those.
@MMarbleroller • 1 year ago
A fun video would be one about doing the same practice with systems where the outputs are not exactly identical every time. For example, imagine something that emits a JSON stream, but the fields might not always be in the same order because they are populated into the data source multi-threaded/asynchronously. Or maybe there is a field whose value is time-based. Hunting for robust comparison methods can be an art.
@jacquelinelee3052 • 1 year ago
The ApprovalTests library has scrubbers for nondeterministic data like time-based fields, so your results are consistent. You can also order your results in a step before the verify, either in the object's toString or in a custom "printer" function.
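For readers who want the flavour of that without the library, here is a hand-rolled sketch of the scrubber/printer idea. The regexes and event payload are illustrative; the real ApprovalTests library ships ready-made scrubbers, as noted above.

```python
import json
import re

ISO_TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?")
UUID = re.compile(r"[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}")

def scrub(text: str) -> str:
    """Replace volatile values with stable placeholders before comparing."""
    text = ISO_TIMESTAMP.sub("<timestamp>", text)
    text = UUID.sub("<uuid>", text)
    return text

def printer(payload: dict) -> str:
    """Canonical rendering: sorted keys remove field-order nondeterminism."""
    return scrub(json.dumps(payload, indent=2, sort_keys=True))

event = {
    "created_at": "2024-05-01T12:34:56",
    "id": "3f2b8c9e-1a2b-4c3d-8e9f-0a1b2c3d4e5f",
    "status": "ok",
}
print(printer(event))
# {
#   "created_at": "<timestamp>",
#   "id": "<uuid>",
#   "status": "ok"
# }
```

Sorting the keys removes the field-order nondeterminism described in the comment above, while the scrubbers neutralise time-based and unique-ID fields.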
@kourosh234 • 1 year ago
To all so-called software development managers: for God's sake, stop making our job more difficult than it is. A real programmer in an administrative position makes things easier, not more complicated. Even if there is value in some test or some other technical thing, they aim to develop a product ASAP and make it available to the customer, nothing else. So I believe the main problem in the software development world is that programmers do not decide. Power to those who work, not the bullshitters.
@RiccardoMerlin • 1 year ago
Would you delete approval tests once refactoring is done and replace them with unit tests? I see a scenario where I want to use TDD to drive the design of the new code, so approval tests are what I call a safety net, but then they become redundant once the refactoring is completed.
@ghosty918 • 25 days ago
If your approval tests have all been identified as correct behavior during the refactor, they can become unit tests with a simple rename. Then you can keep them for future refactors and feature implementation.
@francescomastellone9444 • 1 year ago
Hello Dave, I'm a big fan of all your videos and I've learned a lot from you! You make great arguments this time too, of course, but I feel like you could have mentioned the caveats a bit earlier and a bit more strongly...

Maybe I'm just bitter after having dealt with a frontend codebase that made extensive use of approval tests, where each test was named something along the lines of "thing X should work". "...should work" is never a great test naming scheme, because it loses the documenting power of test names, but in the case of approval testing it's also quite wrong: approval tests don't ever verify that anything "works", they only check that some results match their previously approved values. Long story short, the team didn't realize this, and eventually, as bugs made their way into the approved results, developers learned to carelessly approve new results, because plenty of results were breaking each time a change was made... Also, coverage thresholds prevented the team from honestly getting rid of tests with bugged approved results. Clearly, extensive use of approval testing wasn't the only thing wrong with that project's testing approach, but it sure left me with a bit of distaste for it.

My takeaways from that project were that:
- you shouldn't use approval checks as part of integration tests, as their results change too often, and who's got time to review and approve new results?
- approval tests don't really work as proper correctness checks
- approval tests shouldn't make up the majority of your tests, far from it

These days I may use approval tests when I'm "just tinkering" and not doing TDD, but I tend not to commit them to master. Anyway, I don't disagree with your video, I just wanted to share my experience and make others a bit more cautious about these tests :) thanks as always!
@ContinuousDelivery • 1 year ago
I agree with your limitations, but I thought quite a lot of the video was saying exactly that. I agree that approval tests operate in a niche and are not a replacement for TDD or BDD-style acceptance tests, which provide a MUCH stronger assertion that your code does what you want it to, rather than only what it used to do.
@EmilyBache-tech-coach • 9 months ago
My follow-up video on Approval Testing, covering what Dave Farley still doesn't get: kzfaq.info/get/bejne/oLWlpKiQ1J_WnZs.html
@ponypapa6785 • 1 year ago
So far, I try to write small tests for legacy systems. Meaning, I read the code, identify the branches, and basically send input into the code and record the behavior for each branch. This preserves behavior in a much more fine-grained manner, and it allows a great deal of security when refactoring. Would these still be considered approval tests? There is no "writing to disk", and it requires reading and trying to understand the code that is already there, which makes it - in my opinion - easier to refactor. Also, unreachable branches are identified more quickly. And are these not also a type of regression test and/or exploratory test?
@ContinuousDelivery • 1 year ago
I think these kinds of tests are specifically about comparing the "before" and "after" behaviour; that, for me anyway, is what makes them distinct from other forms of testing. They are a form of regression testing, but they are not exploratory tests. Exploratory testing is a human wandering around in the system trying things, "exploring the system". Are your tests "approval tests"? Probably not, because they are testing your guess of the behaviour: you have encoded your guess of what the code is doing in the test, whereas the approval test records what it actually does. As I say in the video, this second approach has its limitations, and your kind of test is, in general, a more useful kind of test, because it asserts more strongly what the code is meant to do, rather than just confirming that it still does what it used to do.
@law1213 • 8 months ago
This makes sense for files, but what about when you have a large, complex set of legacy databases driving the I/O?
@ContinuousDelivery • 8 months ago
Control the variables! Maybe, for approval testing, deploy the system with a DB populated with a fixed, known data set.
@law1213 • 8 months ago
@ContinuousDelivery Thanks. I think controlling the DB with a known dataset is going to be the way. This legacy system isn't just legacy; it's actually some of the worst code I've seen in my 15-year career: 1000-line methods and stored procedures absolutely everywhere. We are refactoring but struggling to make a dent! Unfortunately, it's driven by 15 large and complex databases with many relationships, all tightly coupled, and no layers or patterns to speak of. 😕
@ContinuousDelivery • 8 months ago
@@law1213 The only real downside with the approach I mentioned is if the data schema is changing, or if the DB is so horrible that it is hard to reproduce. In the latter case, I'd probably use a snapshot to freeze the DB in time and re-establish it from a clone for each test run. If the DB is in regular, active development - which is probably unlikely for a system as scary to change as the one you describe - then you need to stabilise the DB changes so that you can keep up, which takes you into the realms of data migration rather than approval testing 😕
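In miniature, the fixed-dataset idea from this thread might look like the following sketch, using an in-memory SQLite database as a stand-in; seed_known_dataset and legacy_report are invented for illustration.

```python
import sqlite3

def seed_known_dataset(conn: sqlite3.Connection) -> None:
    """Populate the test DB with a small, fixed, version-controlled data set."""
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [(1, 10.00), (2, 25.50), (3, 7.25)],
    )

def legacy_report(conn: sqlite3.Connection) -> str:
    # Stand-in for the tangled legacy reporting code under test.
    rows = conn.execute("SELECT id, total FROM orders ORDER BY id").fetchall()
    return "\n".join(f"order {order_id}: {total:.2f}" for order_id, total in rows)

conn = sqlite3.connect(":memory:")
seed_known_dataset(conn)
print(legacy_report(conn))  # this deterministic output is what gets approved
```

Because the data set is fixed, the report is deterministic, so it can be approved once and diffed on every run while the legacy code is refactored.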
@georgehelyar • 1 year ago
TL;DR: approval testing - it's (slightly) better than nothing.

I've seen these kinds of tests before, and they are really brittle. If you are doing pure refactoring, they will sometimes, but not always, let you know whether you have made any functional changes. If you actually want to make changes, your tests will break, and then you have to judge whether those breaks are actually correct; but you don't really know, because there are no real tests that say what correct means. If you are doing refactoring in a legacy codebase, it's because at some point you will want to make actual changes, and at that point what you've got are some pretty useless tests that just pad the code-coverage metrics and all break as soon as you change the code, so you spend more time fixing the tests than making the new change.

The one place I have seen them be useful is when a 3rd-party UI library breaks the styling on a website and all you are interested in is working out what was broken between versions of the library - updating to a new version of Bootstrap or whatever. Thankfully, I hardly do any front-end work any more.
@YonoZekenZoid • 9 months ago
This looks very useful, but I wonder what I should do with dependencies. I am currently working on a codebase that is full of objects that are 3000+ lines long and have at least 20 dependencies each, and I'm not sure how to deal with them. At first I thought "if I don't mock these out, it'll be an integration test", so I mocked them out for a smaller function with about 6 dependencies and gave it a try. The tests cover the code, but they just look like bloated unit tests. I didn't even need snapshots for most of them, because I could just check that the value returned was one of my mocks. Should I not mock out anything? Should I just mock out the database calls? I'm a bit confused, TBH, and would love it if someone who has worked with approval testing could tell me how they do it. Thanks!
@ghosty918 • 24 days ago
There are two ways of doing this: don't mock anything and test the whole thing (your code isn't being deployed to handle mock inputs, after all), or find a real input and make that your initial approval, then make every outside call part of the approval while mocking the responses to equal the real input's responses.
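That second option - recording the outside calls as part of the approved output - might look roughly like this hypothetical sketch; RecordingGateway and convert are invented for illustration.

```python
class RecordingGateway:
    """Test double: replays a response captured from a real run and logs the call."""
    def __init__(self, canned_rate: float, log: list):
        self.canned_rate = canned_rate
        self.log = log

    def fetch_rate(self, currency: str) -> float:
        self.log.append(f"fetch_rate({currency!r})")
        return self.canned_rate  # value copied from a real production call

def convert(amount: float, currency: str, gateway) -> float:
    # Stand-in for the legacy code under test.
    return round(amount * gateway.fetch_rate(currency), 2)

log: list = []
result = convert(100.0, "EUR", RecordingGateway(canned_rate=1.18, log=log))

# The approved snapshot covers the outside calls as well as the final result:
print("\n".join(log + [f"result={result}"]))
# fetch_rate('EUR')
# result=118.0
```

Approving the interaction log together with the result means a refactoring that accidentally changes how the dependency is called will trip the comparison, not just one that changes the returned value.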
@uome2k7 • 1 year ago
If you think of the legacy code/system as the unit under test, approval testing is essentially integration testing of the interface to that system. Your refactoring efforts will provide a new implementation of it. You would then be able to write unit tests on the smaller sections of refactored code. In that vein, approval tests could be the high-level tests you write when doing TDD for new code, as you code the behavior the first time. I think tripping over the "approval test" name might be behind the confusion. Or maybe I am missing how these tests would be different from what you would have when doing BDD/TDD.

Great example of showing how you used the coverage report to add test cases covering all the behavior in the code.

When speaking of testing, the demarcation of unit test, integration test, approval test, contract test, end-to-end test, etc. confuses a lot of people. In the end, the only difference is what you consider the "unit under test" to be. Where is the box drawn around what you are testing? Once you decide that, the testing procedure is the same: with certain inputs, verify that the "unit under test" produces the expected output/result. The pyramid builds itself from there rather quickly.
@ContinuousDelivery • 1 year ago
I disagree that the only difference is the "unit" under test. The approval test is very different, in my opinion, because we don't start with the test, we start with the code. That is why I think it is important to treat them with more caution. TDD and BDD both, by definition, start with the test, so they are really more like specifications than tests. The approval test will assert behaviour that's wrong if the code is already wrong. Good for refactoring; rubbish for TDD or acceptance testing.
@askolotus_prime • 1 year ago
@@ContinuousDelivery So Approval Tests are a kind of snapshot test, and we can use them for any system output or pure UI snapshots.
@ContinuousDelivery • 1 year ago
@@askolotus_prime Yes, I think "Snapshot Tests" is another synonym for "Approval Tests". I prefer "Characterisation Tests" to both, though "Snapshot" is probably more accurate than "Approval".
@askolotus_prime • 1 year ago
@@ContinuousDelivery I guess it's the difference between "how it works" for the Snapshot concept and "why we do it" for the Approval concept. Another example of declarative vs. imperative, if I may say so :)
@JohnWilson-xl3rl • 1 year ago
@@ContinuousDelivery What do you make of this then, Dave? - kzfaq.info/get/bejne/n82pqqirtbC2o4E.html
@KnumNegm • 1 year ago
We call it regression testing.