Investigating a Gradual Drop in Conversion Rate: A Case Study for a Fashion E-commerce Giant, Part II
Unveiling Insights and Taking Action
Hey there! Piyush here, and I'm thrilled to bring you another edition of my weekly analytics newsletter. Let's dive right in!
In the first part of our case study, we explored the initial stages of our investigation into the gradual drop in conversion rate for a fashion e-commerce giant. We witnessed how Rohit, our diligent product analyst, leveraged cohort and funnel analysis to gain valuable insights into the user journey. Now, we dive deeper into the next phase of our investigation, where we uncover conclusive findings and take action to address the identified issues.
To revisit the first part of this blog, please refer to the link provided.
As we left off, Rohit had discovered that the drop-off in conversion rate was most prominent for users who reached the product page through search on the app. This insight raised suspicions about potential issues with the app's search functionality or the relevance of search results.
The Analytics-Data Science Collaboration
In the vibrant and art-adorned office of the fashion e-commerce giant, Rohit sought out Shreyash, the seasoned data scientist leading the search team.
They had crossed paths on several occasions but had never engaged in a detailed conversation. Little did they know that this encounter would mark the beginning of a remarkable collaboration.
As Rohit approached Shreyash's desk, he noticed the walls adorned with intricate graphs and equations, a testament to Shreyash's expertise in data science. With a slight hint of nervousness, Rohit cleared his throat and initiated the conversation.
Rohit: "Hi Shreyash, I hope I'm not interrupting anything important. I've been diving deep into our app's user journey, specifically focusing on the drop in conversion rate for users who reach the product page through search."
Shreyash, known for his calm demeanour, looked up from his laptop screen and offered a friendly smile.
Shreyash: "Not at all, Rohit. I'm always open to discussions that can help us uncover insights and improve our systems. Please, have a seat. What have you discovered so far?"
Rohit took a seat, a sense of anticipation building within him. He began sharing his findings, outlining the peculiar drop-off in conversion rate and the potential issues with the search functionality or relevance of search results.
Rohit: "As I analysed the data, it became apparent that users who relied on the search feature were experiencing a significant drop in their conversion rate. This led me to suspect that there might be some issues with our search algorithm or the relevance of the search results."
Shreyash listened attentively, intrigued by Rohit's observations. He leaned back in his chair, ready to contribute his expertise.
Shreyash: "I appreciate your thorough analysis, Rohit. However, I must inform you that we recently deployed a new search model aimed at enhancing the user experience. The initial feedback and performance indicators have been promising. In fact, let me show you a few key search metrics that demonstrate the positive impact of our latest implementation."
Shreyash navigated through his meticulously organised dashboards and presented a set of insightful visuals, showcasing the improvement in search metrics.
Analysing Key Search Metrics
Shreyash walked Rohit through the key search metrics that shed light on the performance of the search results, covering the following:
Click-through Rate (CTR): This metric measures the percentage of users who click on the search results out of the total number of users who view them. Shreyash displayed a graph depicting a significant increase in the CTR for users on the new search model compared to users on the old model. This indicated that the new search algorithm was effectively driving higher engagement and click-through rates.
Conversion Rate: The conversion rate metric measures the percentage of users who complete a desired action, such as making a purchase or signing up for a service, out of the total number of users who interacted with the search results. Another graph demonstrated a noticeable improvement in the conversion rate of users on the new model compared to users on the old model. This suggested that the new search algorithm was more successful in driving desired actions and achieving business objectives.
Bounce Rate: Bounce rate measures the percentage of users who leave the app after viewing a single search result. A high bounce rate may indicate that the search results are not meeting user expectations or are not relevant to their needs. The graph displaying the bounce rate showcased a significant decrease for users on the new model compared to users on the old model. This indicated that the new search algorithm was successful in reducing user bounce rates, implying improved search result relevance and user satisfaction.
Zero-Result Rate: This metric measures the percentage of searches that yield no results. A lower zero-result rate indicates that the search algorithm is effectively returning relevant results for a wide range of user queries.
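Metrics like these can be computed directly from aggregated event counts. A minimal sketch, using illustrative numbers rather than figures from the case study:

```python
# Hypothetical event counts for illustration; real values would come
# from the app's search and purchase event logs.
searches_total = 120_000        # all searches issued
searches_zero_result = 4_200    # searches that returned no results
result_views = 115_800          # users who viewed a search results page
result_clicks = 52_100          # users who clicked at least one result
purchases = 6_950               # users who went on to purchase
single_view_exits = 18_500      # users who left after one result view

ctr = result_clicks / result_views              # click-through rate
conversion_rate = purchases / result_clicks     # purchases per interacting user
bounce_rate = single_view_exits / result_views  # single-view exits
zero_result_rate = searches_zero_result / searches_total

print(f"CTR:              {ctr:.1%}")
print(f"Conversion rate:  {conversion_rate:.1%}")
print(f"Bounce rate:      {bounce_rate:.1%}")
print(f"Zero-result rate: {zero_result_rate:.1%}")
```

In a dashboard like Shreyash's, each of these ratios would be tracked per day and per cohort rather than as a single snapshot.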
In addition to these metrics, Shreyash introduced the Mean Reciprocal Rank (MRR) as a metric to evaluate the average ranking of the search result clicked by users. MRR takes into account the order of the clicked search results and provides a measure of the effectiveness of the search algorithm in ranking the most relevant results at the top. A higher MRR score indicates a better search result ranking and greater search result relevance.
Shreyash explained further, "For each search, we take the reciprocal of the rank of the first relevant result the user clicks. For example, if that result appears third in the list, the reciprocal rank is 1/3. MRR is the average of these reciprocal ranks across all searches, so a higher score indicates that users are finding relevant results higher up in the search rankings."
Rohit's suspicions about the search functionality and relevance began to dissipate as the metrics provided evidence of the new search model's effectiveness. Shreyash's analysis compared users on the new search model to users on the old model, revealing higher CTR and conversion rates, a stronger MRR, and reduced bounce and zero-result rates.
As the conversation progressed, Rohit couldn't help but express his concern about the potential biases in the data. He recognised that the new search model had not undergone an A/B test, which meant there could be hidden influences impacting the results. In an effort to ensure a comprehensive and unbiased analysis, Rohit shared his thoughts with Shreyash.
"Shreyash, while the metrics indicate positive results, we should be cautious about potential biases in the data. Without an A/B test, we need to carefully examine user cohorts and segment-specific behaviours to uncover any hidden influences on the drop in conversion rate. It's crucial that we dive deeper and gain a comprehensive understanding to make informed decisions," Rohit explained.
Shreyash listened attentively, understanding the importance of addressing potential biases and conducting a thorough analysis. They both agreed that exploring user cohorts, segment-specific search experiences, and delving deeper into the data would be essential to gain a clearer picture of the search dynamics and optimise the search experience for all users.
With a shared commitment to uncovering the truth and refining the search, Rohit and Shreyash embarked on the next phase of their investigation, eager to unveil the underlying factors that contributed to the gradual drop in conversion rate.
Unmasking the Bias: Investigating User Behaviour and App Adoption
As Rohit delved deeper into the analysis, he aimed to examine the possibility of any bias influencing the results. To assess this, he shifted his focus to user engagement and retention metrics that were not directly influenced by the search model.
Rohit scrutinised metrics such as sessions per user, the mix of new versus repeat users, and the average age of users on the platform. These metrics would help him understand if there were any inherent differences between the two cohorts: users on the new search model and users on the old model.
To his surprise, Rohit noticed a significant disparity between the two cohorts: users on the new search model exhibited higher levels of engagement and transactional activity even on metrics unrelated to search. This made him question whether the apparent improvement was due to the model at all. The users on the new model might simply have been of higher quality to begin with, a classic case of selection bias, which would inflate their metrics regardless of the model's performance.
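A comparison like Rohit's can be sketched by profiling each cohort on metrics the search model cannot influence. The records and field names below are hypothetical stand-ins for a warehouse table:

```python
from statistics import mean

# Hypothetical per-user records: (cohort, sessions in last 30 days,
# is_new_user, account age in days). Real data would come from the warehouse.
users = [
    ("new_model", 14, False, 540),
    ("new_model", 11, False, 700),
    ("new_model",  9, True,   20),
    ("old_model",  4, True,   15),
    ("old_model",  6, False, 300),
    ("old_model",  3, True,   10),
]

def cohort_profile(cohort):
    """Summarise engagement metrics that are independent of search quality."""
    rows = [u for u in users if u[0] == cohort]
    return {
        "avg_sessions": mean(r[1] for r in rows),
        "pct_new_users": mean(1 if r[2] else 0 for r in rows),
        "avg_account_age_days": mean(r[3] for r in rows),
    }

for cohort in ("new_model", "old_model"):
    print(cohort, cohort_profile(cohort))
```

If the cohorts differ sharply on metrics the search model could not have moved, as they do in this toy data, the difference points to selection bias rather than model performance.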
Continuing his investigation, Rohit delved further into understanding the source of the observed bias. He carefully examined the circumstances surrounding the deployment of the new search model and its impact on user behaviour.
Rohit soon realised that the new model was introduced with a major app release, which included updates and enhancements beyond just the search functionality. As a result, users would only receive the new model if they actively updated their app to the latest version. This realisation led him to a significant insight—the bias stemmed from the fact that only active users, who were more likely to engage and transact on the platform, had updated to the new app version and were consequently using the new search model.
The bias, therefore, originated from the self-selection of users who actively updated their app. These users represented a segment of highly engaged and committed individuals, which naturally influenced the user metrics and potentially skewed the performance evaluation of the new search model.
With this newfound understanding, Rohit had identified the underlying cause of the bias, and gained a valuable lesson in considering user behaviour and adoption patterns when evaluating a new model. This prompted Rohit and Shreyash to recognise the importance of running an A/B experiment, which would provide a robust framework for unbiased analysis and allow them to draw causal inferences.
They understood that an A/B experiment was essential for any new update or enhancement, negating the impact of biases and accurately assessing the effectiveness of the changes made. By adopting this experimental approach, Rohit and Shreyash aimed to establish a causal relationship between the new search model and its impact on user engagement and conversion rates.
The A/B Test: Deriving Causal Inference
Excited by their newfound insights and armed with an understanding of the bias, Avantika, Rohit, and Shreyash embarked on a crucial next step: conducting an A/B experiment. They understood that this experiment would provide a rigorous and unbiased evaluation of the new search model's performance. With careful planning and implementation, they executed the experiment, comparing user engagement and conversion rates between users randomly assigned to the new search model and a control group on the old model.
The results of the A/B experiment confirmed what the bias analysis had hinted at: the new search model actually performed worse on user engagement and conversion rates. This outcome validated their concerns and highlighted the importance of thorough evaluation and experimentation before implementing major updates. Armed with this knowledge, Avantika, Rohit, and Shreyash were able to make informed decisions for optimising the search experience and driving improved conversion rates.
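One common way to judge whether an observed conversion difference in such an experiment is real or noise is a two-proportion z-test. A minimal sketch, with illustrative counts rather than the case study's actual numbers:

```python
from math import sqrt, erf

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference in conversion proportions
    between variant A (treatment) and variant B (control)."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: treatment converts 980 of 20,000 users,
# control converts 1,100 of 20,000.
z, p = two_proportion_z(980, 20_000, 1_100, 20_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A negative z with a small p-value, as in this toy example, would indicate the treatment converting significantly worse than control, matching the outcome the team observed.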
Key Discoveries and Conclusive Actions
The collaborative efforts of Avantika, Rohit, and Shreyash highlighted the importance of leveraging data-driven insights, conducting thorough analysis through funnel and cohort examination, and embracing experimentation to drive innovation and enhance user experiences. By meticulously dissecting the user journey through funnel analysis, they gained valuable insights into the specific stages where the drop in conversion rate occurred, enabling them to identify the root cause with precision. Furthermore, by segmenting users into cohorts and analysing their behaviours, they were able to uncover hidden influences and nuances that contributed to the overall performance of the search model.
As they concluded their investigation, Avantika, Rohit, and Shreyash eagerly shared their findings and recommendations with the wider team, stressing the significance of funnel and cohort analysis in understanding user behaviour and identifying areas for improvement. They emphasised the need for ongoing monitoring and analysis of key metrics throughout the user journey to ensure a comprehensive understanding of the entire conversion funnel.
In addition to their focus on funnel and cohort analysis, Avantika, Rohit, and Shreyash recognised the vital role of A/B experimentation in deriving causal inferences and drawing accurate conclusions. They highlighted the importance of conducting controlled experiments, where users are randomly assigned to different versions of the search model, to measure the true impact of any changes made. By implementing A/B testing, they could negate biases and confidently evaluate the performance of the new search model, ultimately leading to more informed decision-making and continuous improvement.
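Random assignment of the kind described above is often implemented as deterministic hashing of the user ID, so the same user sees the same variant on every session. A minimal sketch; the experiment name and the 50/50 split are assumptions for illustration:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "search_model_v2") -> str:
    """Deterministically bucket a user into control or treatment.

    Hashing the experiment name together with the user ID keeps
    assignments stable per experiment while remaining independent
    across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # spread users across 100 buckets
    return "treatment" if bucket < 50 else "control"

# The same user always lands in the same arm across sessions.
print(assign_variant("user_123"))
```

Because assignment depends only on the user ID, the treatment and control groups are statistically comparable on everything else, which is exactly what the app-update bias had destroyed.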
Avantika, Rohit, and Shreyash's collaborative efforts established a framework for unbiased evaluation, hypothesis testing, and deriving causal inferences through A/B experimentation. Their achievements have set a precedent for future endeavours, ensuring that funnel and cohort analysis, coupled with rigorous A/B testing, will remain integral to the company's growth and success in the dynamic world of online retail.