content

Changing content type of a file in a FormData request with cURL

The other day I had to test an API's response to a FormData PUT request in which a file carried the wrong Content-Type. After some digging in cURL I found that the -F / --form flag can set both the file and its content type. If you don't specify the content type it will be inferred from …
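The `;type=` modifier to -F is what does the overriding; a minimal sketch, where the endpoint, field name, and file are placeholders:

```shell
# Upload a file in a multipart/form-data PUT request,
# forcing that part's Content-Type to text/plain
# (e.g. to test how the API handles a mismatched type).
curl -X PUT "https://api.example.com/upload" \
  -F "file=@report.csv;type=text/plain"
```

Leaving off `;type=…` lets cURL infer the content type itself, which is exactly what makes the explicit override useful for testing mismatches.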





Actor Kishore hopes Kannada movie Kantara’s success will encourage directors to explore new content and presentation

Kannada actor Kishore hopes the response to ‘Kantara’ will encourage Kannada filmmakers to explore new content in different ways





Kannada cinema’s content crisis

Lack of quality writing — even in much-hyped latest films like Kranti and Kabzaa — is one of the biggest reasons behind the poor show of recent mainstream Kannada films





4 Entrepreneurs Give Their Smart Tips for Innovation | WIRED Brand Lab | BRANDED CONTENT

BRANDED CONTENT | Produced by WIRED Brand Lab and SELF for smartwater. Entrepreneurs Aminatou Sow, Joe Holder, Billie Whitehouse, and Jessica Nabongo share how innovation can have an impact on your success.





Kickstart Your Business: Automate It! | BRANDED CONTENT

BRANDED CONTENT | Courtesy of LEGO® Education | One of the many lessons from LEGO® Education SPIKE™ Prime, Automate It! challenges students to create and program an automated helper that can identify and ship the correct package based on color.





Robust determination of low Ti contents in zircon using LA–ICP–MS/MS with NH3 reaction gas

J. Anal. At. Spectrom., 2024, Advance Article
DOI: 10.1039/D4JA00304G, Paper
Junlong Niu, Shitou Wu, Yueheng Yang, Hao Wang, Chao Huang, Lei Xu, Liewen Xie
Compared with the single quadrupole (SQ) mode, the triple quadrupole (TQ) method is more robust for determining low Ti contents in zircon at high spatial resolutions (laser spot size <20 μm), making it useful for the analysis of complex zircon grains.
To cite this article before page numbers are assigned, use the DOI form of citation above.





The 5 Most Important Stages in the Content Marketing Process

Looking to implement a content marketing plan for your business, but not sure where to begin?

Used well, content can help you maximize connection with your target audience and become a key resource in your niche. Your content can boost brand awareness and exposure, and eventually lead to greater business success as potential customers become aware of your brand and its expertise.

So where do you begin?

The key to content success is mapping out a full strategy from the beginning, so you know not only what you're going to create and where you'll share it, but also what metrics you'll use to measure success and how you'll track performance over time.

An effective content strategy takes time to generate results, and if you haven't aligned on the right metrics, you can come off the rails quickly.





Three ways to create a well-planned social marketing and content plan

Today, almost everyone is a critic, complete with an unfiltered opinion. Be it the clout chasers or active advocates, many take to social media to air these opinions and grievances where the views are met with either support or scorn; and the line between the two is often blurred.

To be present online, one must plan well and be equipped with a solid risk register to deal with any potential reputational damage that could occur. While a tweet may lead to brand tragedy, it could also lead to business popularity.

It's all dependent on metrics; a well-planned social marketing and content plan that considers all options should be viewed as your lifeline. Here are three ways to implement such a plan:

1. Make your metrics matter
2. Improve stakes and stakeholder management
3. Take online and offline conversations seriously





5 Steps to Creating Learner-Centered Content on Social Media

Think about learning. We all experience learning differently. Most of us took part in some form of formal education — took a course, went to university, or attended a workshop. More often than not, we associate learning experiences with these formal settings. However, that’s not always the case; learning occurs throughout one’s life and well beyond the classroom.

In a world where social media is becoming part of our lives, it’s more relevant than ever to explore its potential as a learning and teaching tool. Research shows that social media and mobile devices can support learning and create a positive impact on both educators and learners.





5 Content Moderation Strategies for Social Media

Social media moderation services gradually dominated every online business and most industries. Thanks to the unceasing number of new customers flocking to use and try it, social media became a catalyst for introducing and sustaining big and small businesses today. Several entrepreneurs and business owners have taken to the digitally fueled networking channel to spark discussions with their customers and market their brand to the people they want to be a part of their growth and success.

What is social media moderation? And why is it essential for businesses at present? Simply put, content moderation is the work of reviewing, checking, and filtering posts sent by your followers so that they align with your company’s branding and strategy.





Netflix’s Indian content garners over 1 billion views in 2023

Globally, non-English shows and movies made up nearly a third of all viewing, with significant shares for Korean, Spanish, and Japanese language stories





I&B Ministry summons Netflix content head over IC-814 series row

The depiction of hijackers of the Indian Airlines flight from Kathmandu to Delhi has kicked off a row with a section of viewers objecting to the 'humane' projection of the perpetrators





Sony Pics’ new CEO Ravi Ahuja believes in the power of content

Ahuja joined Sony in 2021 and oversaw a vast portfolio of productions, including ‘The Crown’, ‘Better Call Saul’, and ‘The Last of Us’





Summer of discontent for small investors

Whether it’s PMC, DHFL, CKP or Franklin Templeton, it’s a story of ordinary middle-class folk losing their life savings





fit-content and fit-content()

Today we will look at fit-content and fit-content(), which are special values for width and grid definitions. It’s ... complicated — not as a concept, but in its practical application.

min- and max-content

Before looking at fit-content we have to briefly review two other special width values: min-content and max-content. You need those in order to understand fit-content.

Normally (i.e. with width: auto defined or implied) a box will take as much horizontal space as it can. You can change the horizontal space by giving width a specific value, but you can also order the browser to determine it from the box’s contents. That’s what min-content and max-content do.

Try them below.

[Interactive example — min-content and max-content: boxes with width: auto (as much as possible), width: max-content, width: min-content, and width: max-content with a long text that runs the danger of disappearing right out of the browser window.]
  • min-content means: the minimal width the box needs to contain its contents. In practice this means that browsers see which bit of content is widest and set the width of the box to that value. The widest bit of content is the longest word in a text, or the widest image, video, or other asset.
  • max-content means: the width the box needs to contain all of its contents. In the case of text this means that all text is placed on one line and the box becomes as wide as necessary to contain that entire line. In general this is not a desirable effect. The widest bit of content may also be an image, video, or other asset; in that case browsers use its width to determine the box’s width.

If you use hyphens: auto or something similar, the browser will break the words at the correct hyphenation points before determining the minimal width. (I turned off hyphenation in the examples.)
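Sketched as plain width declarations (class names assumed), the two values look like this:

```css
/* The widest single word or asset decides the width */
.box-min { width: min-content; }

/* The whole unwrapped line decides the width */
.box-max { width: max-content; }
```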

Quick Chromia/Android rabbit hole

All Chromium-based browsers on Android (tested in Chrome (v90), Samsung Internet (v87), and Edge (v77)) break off the 'width: max-content' text in the example above at the dash, and thus take the 'width: max-' as the max-content, provided the page does NOT have a meta viewport. No other browser does this — and that includes Chrome on Mac.

Also, Chromia on Android make the font-size a tad smaller when you resize the boxes below the maximum possible width. I will ignore both bugs because this article is about fit-content, and not about these rabbit holes.

These bugs do NOT occur in UC (based on Chromium 78). Seems UC is taking its own decisions here, and is impervious to these particular bugs.

fit-content

Now that we understand these values we also understand fit-content. It is essentially a shorthand for the following:

box {
	width: auto;
	min-width: min-content;
	max-width: max-content;
}

Thus the box sizes with its containing box, but to a minimum of min-content and to a maximum of max-content.

[Interactive example — fit-content as width, min-width, and max-width.]

I’m not sure if this effect is useful outside a grid or flexbox context, but it’s here if you need it.

fit-content as min- or max-width

You can also use fit-content as a min-width or max-width value; see the example above. The first means that the width of the box varies between min-content and auto, while the second means it varies between 0 and max-content.

I find this fairly useless and potentially confusing. What you really mean is min-width: min-content or max-width: max-content. If that’s what you mean, say so. Your CSS will be clearer if you do.

So I believe that it would be better not to use fit-content for min-width or max-width; but only for width.
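Following that advice (class names assumed), write the explicit constraint instead of routing it through fit-content:

```css
/* Clearer than min-width: fit-content */
.example-a { min-width: min-content; }

/* Clearer than max-width: fit-content */
.example-b { max-width: max-content; }
```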

-moz-

Unfortunately, while fit-content works in all other browsers, Firefox still needs a vendor prefix. So the final code becomes:

box {
	width: -moz-fit-content;
	width: fit-content;
}

(These prefixes get harder and harder to defend as time goes by. fit-content has perfectly fine cross-browser support, so I don’t see why Firefox doesn’t just go over to the regular variant.)

fit-content in flexbox and grid: nope

fit-content does not work in flexbox and grid. In the example below the centre box has width: fit-content; it does not work. If it worked the middle box would have a width of max-content; i.e. as small as it needs to be to contain its text.

[Interactive example — flexbox where the centre box has width: fit-content.]

The final example on this page has a test where you can see grid doesn’t understand this keyword, either.

Note that grid and flex items have min-width: min-content by default, as you can see in the example above.

fit-content()

Let’s go to the more complex part: fit-content(). Although it’s supposed to work for a normal width, it doesn’t.

[Interactive example — width: fit-content versus width: fit-content(200px).]

Grid

You can use fit-content(value) in grid templates, like:

1fr fit-content(200px) 1fr
[Interactive example — grid with fit-content(200px).]
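As a complete rule (class name assumed), that track list reads:

```css
.grid {
	display: grid;
	grid-template-columns: 1fr fit-content(200px) 1fr;
}
```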

It means

1fr min(max-content-size, max(min-content, 200px)) 1fr

The max() argument becomes min-content or 200 pixels, whichever is larger. This is then compared to the maximum content size, which is the actual width available due to the constraints of the grid, but with a maximum of max-content. So the real formula is more like this one, where available-size is the available width in the grid:

1fr min(min(max-content,available-size), max(min-content, 200px)) 1fr

Some syntactic notes:

  • We’re talking fit-content() the function here. fit-content the keyword does not work in grid definitions.
  • Here Firefox does not need -moz-. Go figure.
  • fit-content() needs an argument; an empty function does not work. Also, an argument in fr units is invalid.
  • MDN mentions a value fit-content(stretch). It does not work anywhere, and isn’t referred to anywhere else. I assume it comes from an older version of the spec.

I tested most of these things in the following example, where you can also try the bits of syntax that do not work — maybe they’ll start working later.

And that’s fit-content and fit-content() for you. It’s useful in some situations.

Below you can play around with fit-content() in a grid.

[Interactive example — a grid with controls where you can set grid-template-columns and watch fit-content() respond.]





Voice Content and Usability

We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.

Computers have trouble because between spoken and written language, speech is more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.

In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

Voice Interactions

We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:

  • we need something done (such as a transaction),
  • we want to know something (information of some sort), or
  • we are social beings and want someone to talk to (conversation for conversation’s sake).

These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).

That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).

Transactional voice interactions

Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I get a Hawaiian pizza with extra pineapple?

Burhan: Sure, what size?

Alison: Large.

Burhan: Anything else?

Alison: No thanks, that’s it.

Burhan: Something to drink?

Alison: I’ll have a bottle of Coke.

Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.

Informational voice interactions

Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I ask a few questions?

Burhan: Of course! Go right ahead.

Alison: Do you have any halal options on the menu?

Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?

Alison: What about gluten-free pizzas?

Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?

Alison: That’s it for now. Good to know. Thanks!

Burhan: Anytime, come back soon!

This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.

Voice Interfaces

At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.

Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

Interactive voice response (IVR) systems

Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).

While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).

Screen readers

Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).

With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05).

Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)

In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.

Voice assistants

When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.

Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable certain voice assistants are over others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.

At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular for developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program arbitrary Google Assistant skills. Today, users can choose from among thousands of custom-built skills within both the Amazon Alexa and Google Assistant ecosystems.

Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.

As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.

Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.

Voice Content

Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.

Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.

For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)

I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.

As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.




content

A Content Model Is Not a Design System

Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content. 

I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces. 

A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.

Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.

Two essential principles for an effective content model

We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:

  1. Content models must define semantics instead of layout.
  2. And content models should connect content that belongs together.

Semantic content models

A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they tell delivery channels nothing about the content's meaning, so those channels can't present the content appropriately. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.
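To make the contrast concrete, here's a brief sketch in TypeScript. The type names are hypothetical, not from the project described in this article: a presentation-named type tells a channel nothing about meaning, while a semantic type can be mapped to whatever presentation a given channel needs.

```typescript
// Nonsemantic: named for how the content is displayed on the web.
// A voice interface or a bot has no idea what a "card" means.
interface Card {
  heading: string;
  body: string;
  imageUrl?: string;
}

// Semantic: named for what the content is. Any channel can decide
// how (or whether) to present a testimonial.
interface Testimonial {
  author: string;
  quote: string;
  rating?: number;
}

// Presentation becomes a per-channel mapping instead of a property
// of the content itself.
function testimonialToCard(t: Testimonial): Card {
  return { heading: t.author, body: `"${t.quote}"` };
}

const review: Testimonial = { author: "Dana", quote: "Saved us weeks." };
console.log(testimonialToCard(review).heading); // "Dana"
```

Note that the website's design system can still have cards; they just live in the rendering layer, downstream of the content model.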

When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
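As a sketch of what that looks like in practice, here is a Schema.org-typed object shaped the way it might be serialized into a `<script type="application/ld+json">` tag on a product page. The field values are made up for illustration; the `@context`, `@type`, `name`, `description`, and `brand` keys follow Schema.org's published vocabulary.

```typescript
// A Schema.org Product as a plain object, ready to be serialized
// into a JSON-LD script tag so that search engines can parse it.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Acme Widget",
  description: "A compact, battery-powered widget.",
  brand: { "@type": "Brand", name: "Acme" },
};

console.log(JSON.stringify(productJsonLd, null, 2));
```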

A semantic content model has several benefits:

  • Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns. 
  • A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
  • Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.

For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.
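The FAQ scenario from the list above can be sketched in a few lines. This is an illustrative model, not a prescribed one: because each question stays paired with its answer, a web page and a voice interface can both consume the same content without reassembling it.

```typescript
// A semantic FAQ model: the question/answer pair stays together,
// so every channel can consume it without reassembly.
interface FaqItem {
  question: string;
  answer: string;
}

// Web channel: render the whole list as markup.
function faqToHtml(items: FaqItem[]): string {
  return items
    .map((i) => `<h3>${i.question}</h3><p>${i.answer}</p>`)
    .join("\n");
}

// Voice or bot channel: look up a single answer on demand.
function answerFor(items: FaqItem[], question: string): string | undefined {
  return items.find(
    (i) => i.question.toLowerCase() === question.toLowerCase()
  )?.answer;
}

const faq: FaqItem[] = [
  { question: "Do you ship internationally?", answer: "Yes, to 40 countries." },
];
```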

Content models that connect

After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.

Think about writing an article or essay. An article's meaning and usefulness depend on its parts being kept together. Would one of its headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking often led us to create content models that sliced content into disparate chunks to fit a web-centric layout, much like separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?

Because our design-system instincts were so familiar, it felt like we needed a content type called "tab section" so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software's overview or its specifications. Another tab might provide a list of resources.

Our inclination to break the content model into "tab section" pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have produced content that other delivery channels couldn't understand. For example, how would another system tell which "tab section" held a product's specifications or its resource list? By counting tab sections and content blocks? That approach would have prevented the tabs from ever being reordered, and it would have required adding logic to every other delivery channel just to interpret the design system's layout. Furthermore, if the customer later decided not to display this content in a tab layout, migrating to a new content model to reflect the redesign would have been tedious.

A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.

We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: to reveal specific information, such as the software product's overview, specifications, related resources, and pricing. Once implementation began, our inclination to focus on the visual and familiar had obscured the intent of the designs. With a little digging, we quickly realized that the concept of tabs wasn't relevant to the content model; what mattered was the meaning of the content the customer planned to display in them.

In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer had wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
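A minimal sketch of that semantic product model, with illustrative attribute names rather than the client's actual ones, shows how the tab layout falls out of the model entirely:

```typescript
// A semantic model for the software product: attribute names
// describe meaning, and nothing mentions tabs or layout.
interface SoftwareProduct {
  name: string;
  description: string;
  screenshots: string[];    // image URLs
  requirements: string[];   // e.g., supported operating systems
  features: string[];
}

// What used to be "tab section 1" is now just one channel's choice
// of how to group these attributes. Reordering tabs, or dropping the
// tab layout entirely, never touches the content model.
function overviewText(p: SoftwareProduct): string {
  return `${p.name}: ${p.description}`;
}
```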

A good content model connects content that belongs together so it can be easily managed and reused.

Conclusion

In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:

  • A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
  • If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
  • Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing.

By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.




content

A systematic review of green and sustainable chemistry training research with pedagogical content knowledge framework: current trends and future directions

Chem. Educ. Res. Pract., 2025, Advance Article
DOI: 10.1039/D4RP00166D, Review Article
Sevgi Aydin Gunbatar, Betul Ekiz Kiran, Yezdan Boz, Elif Selcan Oztay
To cite this article before page numbers are assigned, use the DOI form of citation above.
The content of this RSS Feed (c) The Royal Society of Chemistry




content

A licence raj for digital content creators

The Broadcasting Bill, 2024 bears all the signs of a digital authoritarianism project designed to control online narratives




content

Pak. suspends Nickelodeon over ‘Hindi content’

Licence suspended for airing cartoons dubbed in Hindi in violation of ban on ‘Indian content’




content

FIR against BJP for malicious content 2 days before Jharkhand polls

The cybercrime police station Ranchi wrote to a social media platform to remove the objectionable posts under Section 69(A) of the IT Act




content

Persuasion with Correlation Neglect: Media Power via Correlation of News Content [electronic journal].




content

The geography of EU discontent [electronic journal].




content

Corporate Profit Taxes, Capital Expenditure and Real Wages: The analytics behind a contentious debate [electronic journal].




content

High moisture content leaves cotton growers worried in some Indian States

Telangana and Maharashtra badly hit as CCI looks for low moisture content




content

Discrimination of Diptera order insects based on their saturated cuticular hydrocarbon content using a new microextraction procedure and chromatographic analysis

Anal. Methods, 2024, Accepted Manuscript
DOI: 10.1039/D4AY00214H, Paper
Open Access
Lixy Olinda León-Morán, Marta Pastor-Belda, Pilar Viñas, Natalia Arroyo-Manzanares, María Dolores García, María Isabel Arnaldos, Natalia Campillo
The nature and proportions of hydrocarbons in the cuticle of insects are characteristic of the species and age. Chemical analysis of cuticular hydrocarbons allows species discrimination, which is of great...




content

Filestack: A File Uploader and Powerful APIs to Deliver and Transform App Content

[Sponsored] If you’re building an app that requires a lot of user-generated content and media that needs to be processed, tagged, filtered, or otherwise manipulated in real-time, you definitely want a solution that’s fast and seamless and doesn’t get in the way of your app’s primary functionality. Filestack is a service you’ll want to consider. Here’s what Filestack offers:

The post Filestack: A File Uploader and Powerful APIs to Deliver and Transform App Content appeared first on Impressive Webs.




content

It is an offence: On Supreme Court clarification on online content on child sex abuse

Court has done well to clarify law on online content showing child sex abuse




content

558: Esoteric Weird Content Editable Problems with Kristin Valentine

Kristin Valentine from Vox joins the show to talk about text editor CMS fun across multiple sites, Vox's Chorus, The Verge redesign, sharing Design Systems, theming articles, and a fun new game called "Can Your Text Editor Do This??"




content

568: Display Contents, Passkeys Follow Up, Yellow Fade Technique, and TOTK Talk

Macho Man Randy Standards stops by for a quick chat, Passkeys follow up, discussing the safety of Display: contents, the yellow fade technique, how hot CSS is right now (so hot), and a check in on how everyone's doing with Tears of the Kingdom.




content

Data and its discontents

The quality of the statistics of a country is a measure of its goodness, say the authors




content

Paddy varieties with protein, zinc content get thumbs up in Odisha’s tribal pocket

Paddy varieties have high protein content (10.1%) and a moderately high zinc content (20 ppm)




content

Curriculum content is same across education boards: CISCE chief

He speaks with The Hindu regarding misconceptions about the school board, using technology to beat malpractices, and his plans to improve the quality of school education




content

The Different (and Modern) Ways to Toggle Content

Let’s spend some time looking at disclosures, the Dialog API, the Popover API, and more. We’ll look at the right time to use each one depending on your needs. Modal or non-modal? JavaScript or pure HTML/CSS? Not sure? Don’t worry, we’ll go into all that.


The Different (and Modern) Ways to Toggle Content originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.




content

Lacticaseibacillus paracasei JM053 alleviates osteoporosis in rats by increasing the content of soy isoflavone aglycones in fermented soymilk

Food Funct., 2024, Accepted Manuscript
DOI: 10.1039/D4FO04381B, Paper
Yaping Zheng, Shasha Cheng, Hongxuan Li, Yilin Sun, Ling Guo, Chaoxin Man, Yu Zhang, Wei Zhang, Yujun Jiang
Lacticaseibacillus paracasei JM053 has a high ability to convert soy isoflavones and can be used as a fermentation strain to ferment soymilk, thereby increasing the content of free aglycones in...




content

Sinner beats Fritz again in rematch of U.S. Open final, Medvedev back in contention at ATP Finals

Top-ranked Jannik Sinner has made it two wins in two matches before his home fans at the ATP Finals by beating Taylor Fritz 6-4, 6-4 in a rematch of the U.S. Open final




content

Syndicated Content

Many webmasters are struggling to find fresh, innovative content while other savvy webmasters have realized the potential hidden within RSS and are adopting the technology at a maddening pace. By utilizing RSS, webmasters can tap numerous free content sources with very little effort. RSS truly is a webmaster's key to free content.




content

Content Syndication Through RSS Feeds

RSS, also known as Rich Site Summary or Really Simple Syndication, has been used for years by online visitors. However, it has only recently begun to gain popularity among webmasters as a means of providing visitors with constantly refreshed content. These feeds were originally developed to deliver updated news more quickly, but they have since evolved to allow for nearly instantaneous updates of many types of information.

more on Content Syndication Through RSS Feeds




content

Re-Using Content

Repurposing content is not a terribly new concept, and webmasters who picked up on the trend have benefited from traffic surges for a while now. Repurposing is all about presenting the same content in a variety of ways, or through different mediums, so that it can be offered in any number of formats.

Re-Using Content




content

Sponsored content: How chemistry rocks music festivals

The science enables and enhances the all-encompassing live music experience. 




content

Sponsored content: Making lower auto emissions a reality

Though initially daunting, stricter fuel economy and emission standards result in automotive innovations




content

Sponsored content: Packing the right molecules for the great outdoors

Cutting-edge chemistry makes recreation in the wild more enjoyable—and more comfortable. 




content

Sponsored content: Interdisciplinary innovation

Chemists at ShanghaiTech University are championing academic collaboration and eschewing traditional subject-matter divisions




content

Invasion by content creators: A new threat to Pune sex workers




content

Original Content podcast: ‘Upload’ is a cheerful show about a nightmarish future

“Upload” feels like a slight, funny show — until you realize that without the jokes, the story would be unwatchably bleak. The Amazon Prime Video series (created by Greg Daniels of “The Office,” “Parks & Recreation” and the upcoming “Space Force”) takes place in a near future where people can upload digital copies of themselves […]




content

Content vs. Ecommerce




content

Coronavirus Impact: How broadcasters are dealing with content shortfall

From playing the reruns of popular shows to using video conferencing tools to create content, broadcasters are using different methods to keep the audience entertained during lockdown




content

Covid-19 effect: Content production companies stare at large losses

OTT platforms do have a buffer in their libraries but delayed production schedules will hit revenues.




content

Netflix sees ‘big growth’ in India as subscribers binge on local content

According to data released by market research firm Kalagato, user engagement on Netflix shot up to as much as 80 minutes a day as of March 28 from a little under 50 minutes on February 5.




content

Is Personalized Content Just What the Doctor Ordered? [Infographic]

For most marketers, personalization is the cure for ineffective content.