Experimentation Under Constraint
By the time we began thinking seriously about experimentation at Svaram, one thing was already clear: this could not look like conventional conversion-rate optimization (CRO).
Svaram was not a startup trying to squeeze incremental lifts out of a single product. It was a living ecosystem of instruments spread across radically different price points, use cases, cultural meanings, and decision cycles. Treating all of them with the same optimization logic would have been not just ineffective, but actively harmful.
The catalogue itself forced a reckoning.
At one extreme were the magnum opus instruments: large installations, the Sonorium, the Soundgarden, and eventually the Sonic Stone. These were not personal purchases. They were institutional decisions. They involved committees, budgets, spatial planning, and long gestation periods.
At the other extreme were small, almost playful objects: clay whistles, reed whistles, tiny percussive instruments. Between these poles sat a wide middle ground of flutes, ocarinas, metallophones, wind chimes, and other instruments that could plausibly be bought by individuals.
There were a hundred places to begin. And almost all of them were wrong.
The first instinct was to lean on existing data. Svaram had twenty years of offline sales history. Surely that could inform an online strategy. But it quickly became apparent that this data was deeply skewed. Offline sales were dominated by international institutional buyers and resellers. The motivations, contexts, and constraints of those buyers bore little resemblance to an Indian individual encountering a Svaram instrument for the first time on Instagram.
Our earliest posts and reels reflected this mismatch. They were earnest, well-produced, and largely ineffective.
That failure became useful.
This is where experimentation entered, not as a quest for statistical certainty, but as a way to discipline intuition.
With an Instagram following that ranged between 5,000 and 7,000 at the time, any illusion of statistical significance had to be abandoned early. These were not experiments designed to prove truth. They were probes designed to reveal direction. Each test was run patiently, often over three to four weeks, allowing patterns to repeat or fade. Data did not dictate decisions, but it did prevent blind projection.
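For anyone tempted to check the arithmetic, a back-of-envelope power calculation makes the point. The baseline engagement rate and hoped-for lift below are illustrative assumptions, not Svaram's actual numbers; only the order of magnitude matters:

```python
# A minimal sketch: sample size per variant for a two-proportion test
# (normal approximation). The 2% baseline and 2.5% target are assumed
# purely for illustration.
from statistics import NormalDist

def n_per_variant(p1: float, p2: float,
                  alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate sample size per variant to detect a lift from p1 to p2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = z.inv_cdf(power)          # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# ~13,800 impressions per variant: roughly twice the entire
# following, for each arm of a single test.
print(round(n_per_variant(0.02, 0.025)))
```

Against numbers like these, patient probes read for repeating patterns rather than significance were the only honest option.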
One of the earliest experiments was deceptively simple.
For a flute carousel, we tested two framings. One emphasized aesthetics: polish, texture, craftsmanship. The other emphasized usability: ease of play, portability, low learning curve. This was not a copy test so much as a question about desire. Was the flute being approached as an object of beauty, or as an accessible entry into music‑making?
To make this concrete, the aesthetic version led with the instrument's finish and craftsmanship, while the usability version led with how quickly a beginner could get a clean note out of it. The answer, in this case, leaned clearly toward usability.
Encouraged, we repeated the same hypothesis on a different product: a golden brass plate. Here, the result inverted: the aesthetic framing outperformed the usability one. Grounded in the product, the inversion was easy to read. A brass plate stays where it is placed, admired as much as played, so portability had little to offer.
As experimentation continued, clearer fault lines began to appear, particularly around gifting products.
Wind chimes became a revealing case study. Early sponsored posts described them in neutral, almost technical terms: Indian raga‑tuned, handmade, unique. Performance was unremarkable. Conversations with customers, however, told a different story. Indian buyers spoke about Vastu, energy alignment, and auspiciousness. Western buyers spoke about novelty, craftsmanship, and conversation value.
The product had not changed. The reason for purchase had.
Acting on that insight meant letting the descriptions diverge intentionally.
We split the campaigns geographically. Indian audiences saw language rooted in Vastu and Feng Shui. International audiences saw language rooted in novelty and cultural uniqueness. The effect was immediate and directional enough to matter. For the first time, experimentation was not just correcting copy, but exposing cultural logic.
At this point, it became necessary to formalize what the experiments were already implying: the catalogue sorted itself into three categories, each demanding its own system.
The first category consisted of instruments priced below roughly sixty dollars. These were never subjected to paid experimentation. Margins did not justify it. Instead, they played a quieter role: maintaining freshness, accessibility, and warmth across social media. They carried charm, not revenue targets.
The second category sat between roughly eighty and three hundred dollars. Flutes, ocarinas, metallophones, wind chimes. These were personal purchases. Consideration cycles were manageable. Framing mattered. This is where experimentation earned its keep. Here, Meta became a legitimate demand-generation channel, not because of targeting sophistication, but because content met context.
The third category began around four to five hundred dollars and extended upward into large installations, the Sonorium, and the Sonic Stone. For these, experimentation revealed its own limits. Sponsored posts produced interest but not decisions. Buying cycles were long. Purchases were institutional. Attempts to optimize felt not just ineffective, but inappropriate.
This is where CRO gave way to account-based marketing (ABM).
For high-ticket instruments, the unit of analysis changed. Impressions and clicks receded. Conversations, credibility, and patience took their place. Outreach became deliberate. Silence became expected. Relationships mattered more than reach.
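Condensed to a skeleton, the whole arrangement was a routing rule. The sketch below uses the rough price bands from above; the tier labels and the function itself are illustrative, not anything Svaram actually ran:

```python
# An illustrative routing rule for the three categories. The price
# bands are the rough figures from the text; the labels are mine,
# not Svaram's internal terms.
def playbook_for(price_usd: float) -> str:
    if price_usd < 60:
        return "organic charm: social presence, no paid spend"
    if 80 <= price_usd <= 300:
        return "paid experimentation: framing tests on Meta"
    if price_usd >= 400:
        return "ABM: conversations, credibility, patience"
    return "judgment call: the bands were approximate, not exact"

for price in (25, 150, 900):
    print(price, "->", playbook_for(price))
```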
What emerged was not a rejection of data, but a clearer sense of where data was allowed to speak, and where it needed to stay quiet.
In retrospect, the most important outcome of this phase was not improved performance metrics. It was judgment. The ability to recognize that different products demand different systems, and that optimization is not a universal virtue.