<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yuliya</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Yuliya"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/Yuliya"/>
	<updated>2026-04-24T23:06:26Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Program_Management&amp;diff=16696</id>
		<title>Program Management</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Program_Management&amp;diff=16696"/>
		<updated>2023-02-06T02:31:09Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: Created page with &amp;quot;Context: In lots of situations Program Management isn&amp;#039;t the problem. A lot of times development practices could be better, product selection could be better, etc. But what if...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Context: In lots of situations Program Management isn&amp;#039;t the problem. A lot of the time development practices could be better, product selection could be better, etc. But what if the developers are on top of it, and the product is good? What does program management look like then, and can it be a useful and honest activity, i.e. not just scheduling meetings and not operating in a world of &amp;quot;make-believe&amp;quot; plans that no one believes in?&lt;br /&gt;
----&lt;br /&gt;
# TPM - makes it clear how what we are doing compares to the plan. Forcing conversations like &amp;quot;we were planning for this to take 3 weeks, it&amp;#039;s gonna take 3 months, we need to talk&amp;quot;. Heavily uses data for calling BS. &lt;br /&gt;
# Someone created a plan; they talked to engineers, they did their best. Assume they did their best, and life happened - we are learning that the plan is wrong. &lt;br /&gt;
# Important not to have a culture of &amp;quot;you guys made a plan, stick with it&amp;quot;; it&amp;#039;s important to have a culture of trust, improvement, and collective decisions. &lt;br /&gt;
# Program management - project management across projects. Part of the job is calling BS, making people talk, and making decisions: changing the time or the scope. &amp;quot;What do we do?&amp;quot; - collective decision and ownership. &lt;br /&gt;
# Does the TPM identify the problem, or are they also part of the solution? Depends on the level of the TPM: it starts with program management and asking good questions; with experience comes more technical work and more solutioning. &lt;br /&gt;
# The TPM is in some ways a bridge between different disciplines that might not be motivated to drive the conversation themselves &lt;br /&gt;
# In another company, the TPM facilitates a recurring Risk meeting. However, there&amp;#039;s a fortnightly release meeting, which is way too slow. In a lot of implementations, program planning is coupled with releases, and TPMs represent the engineering team to the larger corporate audience, talking cross-functional talk (e.g. legal, marketing, etc.)&lt;br /&gt;
# Still, the role of the TPM is to force visibility&lt;br /&gt;
# Planning&lt;br /&gt;
* If there&amp;#039;s a specific deadline - work backwards from the deadline; that makes it easier to give realistic estimates&lt;br /&gt;
* Not granular: 5-10 rows; if you need to scroll, there&amp;#039;s too much detail. What needs to be done before what, what can be done in parallel, what has a long lead time. &lt;br /&gt;
* Pushing back the rest of the schedule if that&amp;#039;s what happens. E.g. &amp;quot;we were planning to get this done in 4 weeks. 2 weeks later we are not halfway in. That means it&amp;#039;s gonna take more than 4 weeks. What do we do now?&amp;quot; &lt;br /&gt;
* Being the time-domain person, helping everyone&lt;br /&gt;
* If there&amp;#039;s a Gantt chart, people are gonna manage it. Sometimes it&amp;#039;s helpful to hide it&lt;br /&gt;
* Starting with a plan - take a roadmap without dates. Or with dates, e.g. the CEO told the board we&amp;#039;re gonna do something by some date. Good to understand why that date. &lt;br /&gt;
* The TPM&amp;#039;s role is asking questions, and following up with &amp;quot;but my original question wasn&amp;#039;t answered&amp;quot;&lt;br /&gt;
* During planning, find dependencies and interfaces, which exist regardless of time and estimates. Sometimes a way to facilitate the conversation is to take the schedule off the table, and learn about dependencies, interfaces, uncertainties/risks, and what needs to come before what. &lt;br /&gt;
* Unicorn, or making cross-discipline people talk? A unicorn is helpful; also see if you can enable someone to rise into this role. Otherwise, those two people need to talk - and why would they not?&lt;br /&gt;
* Episode 248 of Troubleshooting Agile&lt;br /&gt;
* SAFe works in some (rare) situations. But it can work, and it can work well. It shows what&amp;#039;s ahead of us, what we need to deliver, and what the dependencies are. Let&amp;#039;s say you have a large 10-month project: break it down into thirds, and see if you can do a third in 3 months. That&amp;#039;s how you&amp;#039;ll get information about whether it&amp;#039;s realistic. Prioritize projects using WSJF. Then the team says, for example, &amp;quot;we can only do 3 projects, not 5&amp;quot;, and you negotiate. WSJF motivates slicing :) (see the sketch below)&lt;br /&gt;
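----&lt;br /&gt;
A rough illustration of WSJF (Weighted Shortest Job First) ranking, added for clarity: the formula (cost of delay / job duration) is the standard SAFe one, but all project names and scores below are made up.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# toy WSJF ranking: cost of delay / job duration (higher = do sooner)&lt;br /&gt;
projects = [&lt;br /&gt;
    # (name, business value, time criticality, risk/opportunity, duration)&lt;br /&gt;
    (&amp;quot;billing rewrite&amp;quot;, 8, 5, 3, 13),&lt;br /&gt;
    (&amp;quot;sso login&amp;quot;, 5, 8, 2, 5),&lt;br /&gt;
    (&amp;quot;audit log&amp;quot;, 3, 2, 8, 8),&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
def wsjf(value, criticality, risk, duration):&lt;br /&gt;
    return (value + criticality + risk) / duration&lt;br /&gt;
&lt;br /&gt;
for name, *scores in sorted(projects, key=lambda p: wsjf(*p[1:]), reverse=True):&lt;br /&gt;
    print(name, round(wsjf(*scores), 2))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because duration is the denominator, slicing a big project into smaller pieces raises their WSJF - which is the &amp;quot;WSJF motivates slicing&amp;quot; point above.&lt;/div&gt;</summary>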
		<author><name>Yuliya</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2023Sessions&amp;diff=16683</id>
		<title>CITCONNA2023Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2023Sessions&amp;diff=16683"/>
		<updated>2023-02-04T23:18:06Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;10am&lt;br /&gt;
&lt;br /&gt;
# [[Beyond XP]] (Story of Pivotal Cloud Foundry post VMware acquisition)&lt;br /&gt;
# [[Impact Driven Testing and Gap Analysis]]&lt;br /&gt;
&lt;br /&gt;
11am&lt;br /&gt;
&lt;br /&gt;
# [[Testing Microservices]]  (Queen&amp;#039;s gambit)&lt;br /&gt;
# [[ADRs, Guardrails and Golden Paths]] (Bojack Horseman)&lt;br /&gt;
&lt;br /&gt;
2pm&lt;br /&gt;
# [[Monitoring driven development]]&lt;br /&gt;
&lt;br /&gt;
3.15pm&lt;br /&gt;
# [[Program Management]]&lt;/div&gt;</summary>
		<author><name>Yuliya</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Monitoring_driven_development&amp;diff=16682</id>
		<title>Monitoring driven development</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Monitoring_driven_development&amp;diff=16682"/>
		<updated>2023-02-04T22:53:56Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Context: There was this company that, looking from the outside, was doing all the agile things - however, from the inside... &lt;br /&gt;
It was a SaaS product, and they were incredibly immature in terms of running a production system - lots of tribal knowledge, lots of reliance on that one guy. &lt;br /&gt;
We walked through the &amp;quot;marketecture&amp;quot; and went through every line - &amp;quot;what would happen if this arrow didn&amp;#039;t work?&amp;quot;. And things broke down very quickly. And then I thought about it from a TDD background - how would we develop software differently if we were to think about how we are gonna monitor it?&lt;br /&gt;
----&lt;br /&gt;
Conversation notes: &lt;br /&gt;
* Would this replace testing? Maybe some - maybe acceptance testing, but not unit testing or TDD. Or maybe running your acceptance tests in production is a form of monitoring. Not replacing or removing acceptance testing before production; instead, running acceptance tests in prod in addition to running them in the pipeline.&lt;br /&gt;
* Why isn&amp;#039;t it done already? Probably because there&amp;#039;s traditionally a separation of concerns between the dev team and the ops/maintenance team.&lt;br /&gt;
* So, what are the goals? E.g. unit testing = code works as expected, acceptance testing = product works as expected ==&amp;gt; monitoring = user receives value as expected. &lt;br /&gt;
* E.g. in the origins of DevOps there was the idea of using what&amp;#039;s happening in production to inform the next work in development. And not only whether features are broken, but also whether features are used&lt;br /&gt;
* Staggered rollout process: if unit tests or system tests fail - roll back the commit. If it works - roll it out to one node, monitor the number of dollars per minute that node produces, compare it to the other nodes, and if it produces fewer dollars - do not roll out the change more widely. (see the first sketch below)&lt;br /&gt;
* Bringing telemetry into the conversation, the questions come up - when do you analyze it? And who gets alerted? And for monitoring or alerting - you can&amp;#039;t add that after the fact if the data doesn&amp;#039;t exist; you need to design the system with alerting and monitoring in mind. &lt;br /&gt;
* You can also do something like that with a test customer account and feature flags. The number of feature flags creeps up, though, and becomes very challenging to manage. One policy: remove feature flags that are older than 6 months, with an internal module that manages feature flags and removes flags older than the threshold. (see the second sketch below)&lt;br /&gt;
* Sometimes it&amp;#039;s easier to add monitoring to the existing code than to find seams, extract code, wrap it in tests, etc. &lt;br /&gt;
* Honeycomb was mentioned several times.&lt;br /&gt;
* The data volume problem shows up - what do you do when you have 1TB of data per day (which happens when you collect a lot of telemetry)?&lt;br /&gt;
* Sounds like lots of us are doing it - so what&amp;#039;s the problem? We aren&amp;#039;t discussing monitoring from the start; we aren&amp;#039;t developing with monitoring in mind. Monitoring is supposed to answer the question of whether the product is bringing the value it was intended to bring. &lt;br /&gt;
* Done vs Done-done: on one of the kanban boards we expanded it all the way to &amp;quot;1st customer used it&amp;quot;, and we wouldn&amp;#039;t take an item off the board until the first customer had used it. &lt;br /&gt;
* Product management spends lots of time thinking about what to build, and might even think about market adoption - but rarely goes back once the feature has shipped to monitor how well it performs. Which leaves product management with no accountability. This could be a healthy thing to look at when the pressure is always on developers to build faster; it gives PMs accountability for choosing what to build. Because if we build trash faster, it won&amp;#039;t make a difference. &lt;br /&gt;
* Another approach - check usage by month. How many people used the product during at least one month of the year? Two months? How come this feature, which people should use every day, is used one month of the year? So, distinguish between &amp;quot;is it working now&amp;quot; and &amp;quot;is it working over time&amp;quot;. &lt;br /&gt;
* How do you get PMs to pay attention? Ask &amp;quot;How would we know that it works in production? How would we know it makes the impact we hope it makes?&amp;quot; when you&amp;#039;re talking about a new feature.&lt;br /&gt;
* &amp;quot;We are gonna A/B test it&amp;quot; - how many interactions do you need to see for statistically significant information? How many users do you have? How much improvement do you want to see? Because in some situations it&amp;#039;s gonna take a year to get that information, and then A/B isn&amp;#039;t a good solution (see the back-of-the-envelope sketch below)&lt;br /&gt;
* How much does a process help to solve a problem? Thinking about it from a Cynefin framework perspective: a situation can be simple, complicated, complex, or chaotic - in some situations process is helpful, and in some you need the mastery of a person with expertise.&lt;br /&gt;
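----&lt;br /&gt;
A rough sketch of the staggered-rollout check described above (illustrative only; the metric names and the 5% tolerance are assumptions, not anyone&amp;#039;s real system):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def canary_is_healthy(metrics, tolerance=0.05):&lt;br /&gt;
    # dollars/minute on the canary node vs. the mean of the other nodes&lt;br /&gt;
    baseline = sum(metrics[&amp;quot;other_nodes&amp;quot;]) / len(metrics[&amp;quot;other_nodes&amp;quot;])&lt;br /&gt;
    return metrics[&amp;quot;canary_node&amp;quot;] &amp;gt;= baseline * (1 - tolerance)&lt;br /&gt;
&lt;br /&gt;
metrics = {&amp;quot;canary_node&amp;quot;: 98.0, &amp;quot;other_nodes&amp;quot;: [101.5, 99.8, 100.7]}&lt;br /&gt;
print(&amp;quot;widen rollout&amp;quot; if canary_is_healthy(metrics) else &amp;quot;roll back&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;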
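A toy version of the flag-cleanup policy mentioned above (the 6-month threshold comes from the notes; the flag names and storage are invented):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from datetime import datetime, timedelta, timezone&lt;br /&gt;
&lt;br /&gt;
MAX_AGE = timedelta(days=183)  # roughly 6 months&lt;br /&gt;
flags = {&amp;quot;new_checkout&amp;quot;: datetime(2022, 7, 1, tzinfo=timezone.utc),&lt;br /&gt;
         &amp;quot;dark_mode&amp;quot;: datetime(2023, 1, 15, tzinfo=timezone.utc)}&lt;br /&gt;
&lt;br /&gt;
now = datetime.now(timezone.utc)&lt;br /&gt;
stale = [name for name, created in flags.items() if now - created &amp;gt; MAX_AGE]&lt;br /&gt;
print(&amp;quot;flags to remove:&amp;quot;, stale)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;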
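And a back-of-the-envelope check for the A/B-testing point, using Lehr&amp;#039;s rule of thumb (n per arm = 16*p*(1-p)/delta^2 for ~80% power at 5% significance); the traffic and effect-size numbers are made up:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
baseline = 0.04        # current conversion rate, 4%&lt;br /&gt;
delta = 0.004          # smallest lift worth detecting, +0.4 points&lt;br /&gt;
users_per_day = 2000   # eligible traffic, split across both arms&lt;br /&gt;
&lt;br /&gt;
n_per_arm = 16 * baseline * (1 - baseline) / delta**2&lt;br /&gt;
days = 2 * n_per_arm / users_per_day&lt;br /&gt;
print(int(n_per_arm), &amp;quot;users per arm, about&amp;quot;, round(days), &amp;quot;days&amp;quot;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
With less traffic or a smaller detectable lift, that estimate easily stretches into months or a year.&lt;/div&gt;</summary>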
		<author><name>Yuliya</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Testing_Microservices&amp;diff=16676</id>
		<title>Testing Microservices</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Testing_Microservices&amp;diff=16676"/>
		<updated>2023-02-04T20:14:57Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Conversation notes&lt;br /&gt;
* Splunk stack and hashi wrapper for monitoring&lt;br /&gt;
* NOC (Network Ops Center) -&lt;br /&gt;
* New bank - common stack, Clojure, AWS&lt;br /&gt;
* Netflix also started with the same stack, but it diverged, assuming commonality of contracts. But Netflix has fantastic monitoring practices&lt;br /&gt;
* Challenge - in a microservices environment, how do you catch issues before production if all tests were fine in the individual environments?&lt;br /&gt;
* Heavy monitoring &lt;br /&gt;
* Learning the process for a year before automating it was helpful (some automations turn out not to be needed)&lt;br /&gt;
* Disney provides client libraries. Versioning - providing min and max. Netflix started with that but had to stop because there were too many different libraries.&lt;br /&gt;
* Sometimes new practices require a change in culture. &lt;br /&gt;
* Deploying across different geographic zones has to be staggered - deploy to the EU, wait for 6 hours, confirm it works, deploy to the next region (see the first sketch below)&lt;br /&gt;
* Microservices can become a distributed monolith; one reason is relying on a shared datastore. &lt;br /&gt;
* At Netflix, rather than trying to find issues in test and trying to reproduce production (too expensive), they invested in monitoring, in expediting fixes, and in implementing &amp;quot;roll back&amp;quot; and &amp;quot;roll forward&amp;quot;. &lt;br /&gt;
* In the context of &amp;quot;roll back&amp;quot;, do people use SLAs or KPIs? SLAs are helpful for making data-driven decisions about investing in improving certain services, but they can also become a stick for management to use in a toxic way. &lt;br /&gt;
* Is there a gold standard for data around service performance? An alternative way - find outliers compared to the average performance of similar services. (see the second sketch below)&lt;br /&gt;
* With error budgets, in some cases you&amp;#039;d halt deployment if a service isn&amp;#039;t complying with specific metrics. You can override that, but it&amp;#039;s manual effort, and with timezones you sometimes need a duplicate team. Netflix decided against error budgets so as not to introduce the additional burden of bureaucracy around them. &lt;br /&gt;
* When we need teams to update their services, &amp;quot;should&amp;quot; didn&amp;#039;t help - teams already have a lot of things they &amp;quot;need&amp;quot; to do. What worked for some folks: include this kind of work in the plan, and set the expectation with the company and leadership that engineering will spend 70% of its time on features and 30% on all kinds of &amp;quot;stuff that comes up&amp;quot;. It still takes time, but it works. And the culture change needs to come from the top down. &lt;br /&gt;
* So, what do you do when stuff still goes down in prod? There&amp;#039;s an incident commander / coordinator / collator - finding who needs to roll back / roll forward whatever needs to be done, and then coordinating a post-mortem, which everyone can join. Action items are separated into short-term, medium-term, and long-term.&lt;br /&gt;
* COE (correction of error) can become a cause of tension: who&amp;#039;s responsible for fixing things? How do you ensure teams are motivated and enabled to spend time fixing their things? &lt;br /&gt;
* A lot of engineers understand technical KPIs but not business KPIs. Helping developers see how their work affects actual business metrics proved helpful. And the severity of an incident must depend on the business impact, not on internal politics (i.e. trying to hide incidents under &amp;quot;P2&amp;quot; to stay off the CEO&amp;#039;s radar)&lt;br /&gt;
* The &amp;quot;if it&amp;#039;s painful - do it more often&amp;quot; principle&lt;br /&gt;
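----&lt;br /&gt;
A rough sketch of the staggered regional rollout described above (the region list, bake time, and health check are placeholders, not a real pipeline):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
REGIONS = [&amp;quot;eu-west-1&amp;quot;, &amp;quot;us-east-1&amp;quot;, &amp;quot;ap-southeast-2&amp;quot;]&lt;br /&gt;
BAKE_SECONDS = 6 * 60 * 60  # &amp;quot;wait for 6 hours&amp;quot; per the notes&lt;br /&gt;
&lt;br /&gt;
def deploy(region): ...           # placeholder deploy step&lt;br /&gt;
def healthy(region): return True  # placeholder post-deploy check&lt;br /&gt;
&lt;br /&gt;
for region in REGIONS:&lt;br /&gt;
    deploy(region)&lt;br /&gt;
    time.sleep(BAKE_SECONDS)  # let it bake before judging&lt;br /&gt;
    if not healthy(region):&lt;br /&gt;
        raise SystemExit(&amp;quot;rollout halted at &amp;quot; + region)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;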
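And a sketch of the &amp;quot;find outliers vs. similar services&amp;quot; idea - the 1.5x-of-peer-average threshold is an invented heuristic, not a standard:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from statistics import mean&lt;br /&gt;
&lt;br /&gt;
p99_ms = {&amp;quot;orders&amp;quot;: 210, &amp;quot;payments&amp;quot;: 225, &amp;quot;catalog&amp;quot;: 198, &amp;quot;search&amp;quot;: 460}&lt;br /&gt;
&lt;br /&gt;
for svc, latency in p99_ms.items():&lt;br /&gt;
    peers = [v for s, v in p99_ms.items() if s != svc]&lt;br /&gt;
    if latency &amp;gt; 1.5 * mean(peers):&lt;br /&gt;
        print(svc, &amp;quot;is an outlier:&amp;quot;, latency, &amp;quot;ms vs peer avg&amp;quot;, round(mean(peers)))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>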
		<author><name>Yuliya</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Beyond_XP&amp;diff=16659</id>
		<title>Beyond XP</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Beyond_XP&amp;diff=16659"/>
		<updated>2023-02-04T18:56:56Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;quot;Post-XP something&amp;quot; (will be replaced by the real name)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Attendees&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
- Jesse, Jeffrey, Nat, Yuliya, Andreas&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Discussion&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This was a discussion about how Cloud Foundry practices changed since the VMware acquisition and the pandemic&lt;br /&gt;
What happened?&lt;br /&gt;
# Moved product managers and designers away (engineers could take over the backlog, but what about talking with customers?)&lt;br /&gt;
# Changed what managers did (80% software development -&amp;gt; full-time manager)&lt;br /&gt;
# Stopped using any co-location enabled practices (e.g. breakfast, synchronization)&lt;br /&gt;
# Emotional anguish of losing something special&lt;br /&gt;
&lt;br /&gt;
And maybe Pivotal wasn&amp;#039;t perfectly solving some things in the first place. E.g. Pivotal PMs operating at the tactical level were acting more as business analysts, and couldn&amp;#039;t focus on product vision and strategic work.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
Post acquisition, &lt;br /&gt;
# the team was 16 people. &lt;br /&gt;
# Decision to stop using Pivotal Tracker, because Pivotal Tracker assumed the Pivotal process, and we couldn&amp;#039;t do that with the new team. The team switched to Basecamp&lt;br /&gt;
# The team decided to try the &amp;quot;Shape Up&amp;quot; methodology for 6 months: 6-week iterations with &amp;quot;bets&amp;quot; (shaped solutions for chunky problems) + 2 weeks off for adjusting, evaluating, and planning. It was very successful for prioritizing and for getting work of the &amp;quot;Shape Up&amp;quot; size done&lt;br /&gt;
# It became much easier to advocate for your own career advancement, because collective effort and accomplishment transitioned to personal accomplishment. That freed up a lot of management time; however, it also meant that a lot of &amp;quot;non-glamorous work&amp;quot; wasn&amp;#039;t getting done&lt;br /&gt;
# Added the notion of a &amp;quot;home team&amp;quot; that everyone defaults to. However, when there&amp;#039;s a &amp;quot;pitch&amp;quot; to be worked on, 3-4 people become a &amp;quot;crew&amp;quot; and step away from the home team for 6 weeks.&lt;br /&gt;
# There&amp;#039;s also a concept of a &amp;quot;tribute&amp;quot; - people volunteering to fill roles that were missing &lt;br /&gt;
# After a year with the &amp;quot;home team&amp;quot;, they moved away from leadership writing pitches and making allocation decisions, to people writing &amp;quot;pitches&amp;quot; themselves and self-allocating to different &amp;quot;bets&amp;quot;&lt;br /&gt;
# Once the team was smaller and working on specific problems, it eventually went back to Pivotal Tracker, and after the break it was actually easier to use it again for the new problem the team was trying to solve&lt;/div&gt;</summary>
		<author><name>Yuliya</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Beyond_XP&amp;diff=16654</id>
		<title>Beyond XP</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Beyond_XP&amp;diff=16654"/>
		<updated>2023-02-04T18:38:21Z</updated>

		<summary type="html">&lt;p&gt;Yuliya: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;quot;Post-XP something&amp;quot; (will be replaced by the real name)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Attendees&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
- Jesse, Jeffrey, Nat, Yuliya&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Discussion&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Discussed how Cloud Foundry practices changed since the VMware acquisition and the pandemic&lt;br /&gt;
&lt;br /&gt;
1. Moved product managers and designers away (engineers could take over the backlog, but what about talking with customers?)&lt;br /&gt;
2. Changed what managers did (80% software development -&amp;gt; full-time manager)&lt;br /&gt;
3. Stopped using any co-location enabled practices (e.g. breakfast, synchronization)&lt;br /&gt;
&lt;br /&gt;
Alternative approach&lt;br /&gt;
1. Empower engineers to groom and own the backlog&lt;br /&gt;
&lt;br /&gt;
Different teams have different problems: sometimes when the question is &amp;quot;who&amp;#039;s gonna groom the backlog&amp;quot; people answer &amp;quot;not me&amp;quot;, and sometimes there are 2 engineers and 5 people trying to tell them what to do - the solutions will be different&lt;br /&gt;
&lt;br /&gt;
Pivotal PMs operating at the tactical level were acting more as business analysts, and couldn&amp;#039;t focus on product vision and strategic work.&lt;br /&gt;
&lt;br /&gt;
Post acquisition, the team was 16 people. Decision to stop using Pivotal Tracker, because Pivotal Tracker assumed the Pivotal process, and we couldn&amp;#039;t do that with the new team. Plus there were a lot of emotional reasons why it was hard to keep trying to do the Pivotal process after the acquisition.&lt;br /&gt;
&lt;br /&gt;
Switched to Basecamp: 6-week iterations with &amp;quot;bets&amp;quot; (shaped solutions for chunky problems) + 2 weeks off for adjusting, evaluating, and planning. The methodology is called &amp;quot;Shape Up&amp;quot;. The team decided to try it for 6 months.&lt;/div&gt;</summary>
		<author><name>Yuliya</name></author>
	</entry>
</feed>