
When Was Photography Invented?

It's hard to imagine a time without photography. With small but powerful cameras that fit in your pocket now a normal part of everyday life, not being able to capture a moment seems like such an alien concept. There are over 95 million photos and videos shared on Instagram every single day, but not too long ago… Continue Reading

The post When Was Photography Invented? appeared first on Photodoto.





Why Buy? Canon m50: A Real Review

Have you dreamed of a simple yet feature-rich portable camera? Something that’s a step up from compact cameras but still small enough to carry around in your day bag? A camera that will deliver the image quality of an entry-level DSLR without the bulk? Oh, and still be affordable? The Canon m50 mirrorless camera just might be your dream come Continue Reading

The post Why Buy? Canon m50: A Real Review appeared first on Photodoto.





3 Common Photography Mistakes To Avoid

Image from Wikimedia. With photography and image-based social media flourishing, there’s no better time to get into this amazing hobby. Although many people do just fine with little or no guidance, there are certain common mistakes that a lot of rookies run into. To give yourself a better start in the world of photography, you need to keep a keen eye out for these slip-ups. Here are three of the most widespread. If you want to pursue photography as a career, then one of the worst things you can do is neglect to learn the jargon. I’ve met more than a few photographers who have a natural talent. Without playing with any settings or even glancing at the subject, they get shots that make even the blandest scenes look incredible. With some of these prodigies, I’ve been totally shocked at how little technical knowledge they have. A good photographer is ... Read more

The post 3 Common Photography Mistakes To Avoid appeared first on Digital Photography Tutorials.





Why You Need a Professional Wedding Photographer

There’s a school of thought which says anyone with a camera can take pictures. Why pay good money to hire someone to do it for you, if you can simply whip out your phone or fancy digital camera and snap away? When it comes to wedding photography, there’s even a school of thought that says you can simply get all your guests to take pictures and upload them to a website for everyone to see. That’s great if some of your guests are skilled with a camera and are prepared to spend your memorable event looking through a lens rather than enjoying the occasion. In general, though, you’re much better off putting yourselves in the hands of an expert wedding photographer, so that you end up with an amazing set of pictures which do justice to your very special day. Here are the reasons why you need to get ... Read more

The post Why You Need a Professional Wedding Photographer appeared first on Digital Photography Tutorials.





Why Is Going Green So Hard? Because Our System Isn’t

By Jill Richardson, OtherWords. If environmental solutions aren’t systemic, living green will always mean going against the grain — and usually failing. Every year around Earth Day, I’m reminded of papers I graded in an environmental sociology class. The … Continue reading





Photography Tips: How To Create An Amazing Floating Image

You can do almost anything today. There are certainly no limits to what the mind can achieve, and that includes floating. By simply manipulating layers in Photoshop, creating a floating image has never been...





Profession Of The Future: Why Is Programmer Education Still Relevant?

In 2020, there is no doubt that people and technology will only grow closer in the future. The modern inhabitant of our planet spends about 45% of their time interacting with technology. We are...





Why Use A Digital Marketing Agency?

Outsourcing your marketing when you're running a small or medium-sized business is often seen as an expensive option for something you could do yourself. It might even be seen as something that's...





Why Choosing The Best Web Hosting Is Crucial For Your Business

Not many business owners think about hosting when building a new website for their business. But failing to choose the right web hosting can have a serious impact on your website and, of course, your...





How To Create A Trustworthy Brand

The variables that determine business success are continually evolving. More than ever, customers want to be able to trust the businesses they transact with. This involves building...





Advanced Photography Tips And Hints

It is always worth paying close attention to any digital photography hints and tips. A few people can take awesome photos without really trying, but most of us need whatever...





Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!





A Viget Glossary: What We Mean and Why It Matters - Part 1

Viget has helped organizations design and develop award-winning websites and digital products for 20 years. In that time, we’ve been lucky to create long-term relationships with clients like Puma, the World Wildlife Fund, and Privia Health, and, throughout our time working together, we’ve come to understand each other’s unique terminology. But that isn’t always the case when we begin work with new clients, and in a constantly evolving industry, we know that new terminology appears almost daily and organizations have unique definitions for deliverables and processes.

      Kicking off a project always initiates a flurry of activity. There are contracts to sign, team members to introduce, and new platforms to learn. It’s an exciting time, and we know clients are anxious to get underway. Amidst all the activity, though, there is a need to define and create a shared lexicon to ensure both teams understand the project deliverables and process that will take us from kickoff to launch.

      Below, we’ve rounded up a few terms for each of our disciplines that often require additional explanation. Note: our definitions of these terms may differ slightly from the industry standard, but highlight our interpretation and use of them on a daily basis.

      User Experience

      Research

      In UX, there is a proliferation of terms that are often used interchangeably and mean almost-but-subtly-not the same thing. Viget uses the term research to specifically mean user research — learning more about the users of our products, particularly how they think and behave — in order to make stronger recommendations and better designs. This can be accomplished through different methodologies, depending on the needs of the project, and can include moderated usability testing, stakeholder interviews, audience research, surveys, and more. Learn more about the subtleties of UX research vocabulary in our post on “Speaking the Same Language About Research”.

      Wireframes

      We use wireframes to show the priority and organization of content on the screen, to give a sense of what elements will get a stronger visual treatment, and to detail how users will get to other parts of the site. Wireframes are a key component of website design — think of them as the skeleton or blueprint of a page — but we know that clients often feel uninspired after reviewing pages built with gray boxes. In fact, we’ve even written about how to improve wireframe presentations. We remind clients that visual designers will step in later to add polish through color, graphics, and typography, but agreeing on the foundation of the page is an important and necessary first step.

      Prototypes

      During the design process, it’s helpful for us to show clients how certain pieces of functionality or animations will work once the site is developed. We can mimic interactivity or test a technical proof of concept by using a clickable prototype, relying on tools like Figma, Invision, or Principle. Our prototypes can be used to illustrate a concept to internal stakeholders, but shouldn’t be seen as a final approach. Often, these concepts will require additional work to prepare them for developer handoff, which means that prototypes quickly become outdated. Read more about how and when we use prototypes.

      Navigation Testing (Treejack Testing)

      Following an information architecture presentation, we will sometimes recommend that clients conduct navigation testing. When testing, we present a participant with the proposed navigation and ask them to perform specific tasks in order to see if they will be able to locate the information specified within the site’s new organization. These tests generally focus on two aspects of the navigation: the structure of the navigation system itself, and the language used within the system. Treejack is an online navigation testing tool that we like to employ when conducting navigation tests, so we’ll often interchange the terms “navigation testing” with “treejack testing”.

      Learn more about Viget’s approach to user experience and research





      A Viget Glossary: What We Mean and Why It Matters - Part 2

      In my last post, I defined terms used by our UX team that are often confused or have multiple meanings across the industry. Today, I’ll share our definitions for processes and deliverables used by our design and strategy teams.

      Creative

      Brand Strategy

In our experience, we’ve found that the term brand strategy is used to cover a myriad of processes, documents, and deliverables. To us, a brand strategy defines how an organization communicates who they are, what they do, and why in a clear and compelling way. Over the years, we’ve developed an approach to brand strategy work that emphasizes rigorous research, hands-on collaboration, and the definition of problems and goals. We work with clients to align on a brand strategy concept and, depending on the client and their goals, our final deliverables can include strategy definition, audience-specific messaging, identity details, brand elements, applications, and more. Take a look at the brand strategy work we’ve done for FiscalNote, Swiftdine, and Armstrong Tire.

      Content Strategy

      A content strategy goes far beyond the words on a website or in an app. A strong content strategy dictates the substance, structure, and governance of the information an organization uses to communicate to its audience. It guides creating, organizing, and maintaining content so that companies can communicate who they are, what they do, and why efficiently and effectively. We’ve worked with organizations like the Washington Speakers Bureau, The Nature Conservancy, the NFL Players Association, and the Wildlife Conservation Society to refine and enhance their content strategies.

      Still confused about the difference between brand and content strategy? Check out our flowchart.

      Style Guide vs. Brand Guidelines

      We often find the depth or fidelity of brand guidelines and style guides can vary greatly, and the terms can often be confused. When we create brand guidelines, they tend to be large documents that include in-depth recommendations about how a company should communicate their brand. Sections like “promise”, “vision”, “mission”, “values”, “tone”, etc. accompany details about how the brand’s logo, colors and fonts should be used in a variety of scenarios. Style guides, on the other hand, are typically pared down documents that contain specific guidance for organizations’ logos, colors and fonts, and don’t always include usage examples.

      Design System

      One question we get from clients often during a redesign or rebrand is, “How can I make sure people across my organization are adhering to our new designs?” This is where a design system comes into play. Design systems can range from the basic — e.g., a systematic approach to creating shared components for a single website — all the way to the complex —e.g., architecting a cross-product design system that can scale to accommodate hundreds of different products within a company. By assembling elements like color, typography, imagery, messaging, voice and tone, and interaction patterns in a central repository, organizations are able to scale products and marketing confidently and efficiently. When a design system is translated into code, we refer to that as a parts kit, which helps enforce consistency and improve workflow.
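To make the "parts kit" idea a little more concrete, here is a rough, hypothetical sketch (not Viget's actual parts kit; the token names, values, and the buttonStyle helper are invented for illustration) of what a token layer translated into TypeScript might look like:

    // design-tokens.ts: a hypothetical, minimal token layer for a "parts kit".
    // All names and values are illustrative only, not an actual client system.
    export const tokens = {
      color: {
        primary: "#0b5fff",
        surface: "#ffffff",
        textMuted: "#6b7280",
      },
      typography: {
        fontFamily: "'Inter', system-ui, sans-serif",
        size: { body: "1rem", heading: "1.5rem" },
      },
      spacing: { sm: "0.5rem", md: "1rem", lg: "2rem" },
    } as const;

    // Components read from the tokens instead of hard-coding values, which is
    // what keeps teams across an organization "adhering to the new designs".
    export function buttonStyle(): string {
      return [
        `background: ${tokens.color.primary}`,
        `color: ${tokens.color.surface}`,
        `font-family: ${tokens.typography.fontFamily}`,
        `padding: ${tokens.spacing.sm} ${tokens.spacing.md}`,
      ].join("; ");
    }

The point is less the specific values than the single source of truth: change a token in one place and every component that consumes it picks up the new design.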

      Comps or Mocks

      When reviewing RFPs or going through the nitty-gritty of contracts with clients, we often see the terms mocks or comps used interchangeably to refer to the static design of pages or screens. Internally, we think of a mock-up as a static image file that illustrates proof-of-concept, just a step beyond a wireframe. A comp represents a design that is “high fidelity” and closer to what the final website will look like, though importantly, is not an exact replica. This is likely what clients will share with internal stakeholders to get approval on the website direction and what our front-end developers will use to begin building-out the site (in other words, converting the static design files into dynamic HTML, CSS, and JavaScript code).

If you're interested in joining our team of creative thinkers and visual storytellers who bring these concepts to life for our clients, we’re hiring in Washington, D.C., Durham, Boulder, and Chattanooga. Tune in next week as we decipher the terms we use most often when talking about development.





      Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

      Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

      As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

      Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

      That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

      In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

      Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

      Why we don't realize we’re muted before talking

      Why does this keep happening?!?

      Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

      Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

      We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

      Zoom and Teams are the worst offenders:

      • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
      • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

      Meet is only slightly better:

      • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

      Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

      Teams is the worst offender:

      • The mute button is an icon rather than words.
      • The muted-state icon's styling could be confused with unmuted state: Teams does not follow the common pattern of using red to denote muted state.
      • The mute button is not differentiated in visual hierarchy from all the other controls.
      • As mentioned above, Teams never shows a secondary mute status indicator.

      Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

      • Pros:
        • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
        • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • The mute button’s placement (bottom left corner of the page) is easy to overlook.
        • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
        • The secondary mute status indicator is a small icon.
        • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
      • Potential Cons:
        • While words denote the button action, only an icon denotes the muted state.

      Meet is probably the clearest of the three apps, but still has pitfalls:

      • Pros:
        • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
        • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • Uses only an icon rather than words to denote the muted state.
      • Unrelated Con:
        • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

      I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

      But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

      I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

      Why we keep unsuccessfully unmuting once we realize we’re muted

      So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

      This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple times before I became aware of it):

      The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

      If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

      There are two traps in this reflexive thought process:

      First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

      In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

      Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

      This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

So my best guess is that hardware design causes the You’re Still Muted Problem. After all, keyboard designs are from a pre-Zoom era, when the average person rarely used the computer’s microphone.

      If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume:

      Video conference keyboard shortcuts you should memorize right now

      Let me know if you have other theories for the You’re Still Muted Problem!

      In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

      • Meet
        • Mac: Command(⌘) + D
        • Windows: Control + D
      • Teams
        • Mac: Command(⌘) + Shift + M
        • Windows: Ctrl + Shift + M
      • Zoom
        • Mac: Command(⌘) + Shift + A
        • Windows: Alt + A
        • Hold Spacebar: Temporarily unmute

      Other vidconf apps not included in my analysis:

      • Cisco Webex Meetings
        • Mac: Ctrl + Alt + M
        • Windows: Ctrl + Shift + M
      • GoToMeeting

      Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!
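One more workaround, and this is a hypothetical sketch rather than anything from the original post: since the missing piece is a system-level microphone control, you can approximate a dedicated mic key in software. The snippet below assumes macOS and Node.js; osascript ships with macOS and can read and set the system input volume. The file name and the 0/75 levels are arbitrary choices, and note that this changes the system-wide input level, not the vidconf app's own mute state, so the app may still show you as unmuted.

    // toggle-mic.ts: hypothetical sketch that emulates a dedicated "mic" key on macOS.
    // Assumes Node.js on macOS; osascript ships with the OS.
    import { execSync } from "node:child_process";

    // Read the current system microphone level (0-100) via AppleScript.
    function getInputVolume(): number {
      const out = execSync('osascript -e "input volume of (get volume settings)"');
      return Number(out.toString().trim());
    }

    // Set the system microphone level via AppleScript.
    function setInputVolume(level: number): void {
      execSync(`osascript -e "set volume input volume ${level}"`);
    }

    // Toggle: silence the mic if it is live, restore a nominal level if it is silenced.
    const current = getInputVolume();
    if (current === 0) {
      setInputVolume(75);
      console.log("Mic restored (input volume 75)");
    } else {
      setInputVolume(0);
      console.log("Mic silenced (input volume 0)");
    }

Bind that to a global hotkey with whatever launcher you already use and you get something close to the dedicated hardware mic key imagined above.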





      9 Convincing Reasons Why Designers Should Pursue Personal Projects

      Web designers have skills and expertise that open up a whole world of possibilities. Many designers and developers choose to pursue personal projects in their own time, which can be a nice change of...

      Click through to read the rest of the story on the Vandelay Design Blog.





      Why Reducing Our Carbon Emissions Matters

By The Conversation. While it’s true that Earth’s temperatures and carbon dioxide levels have always fluctuated, the reality is that humans’ greenhouse emissions since the industrial revolution have put us in uncharted territory. Written by Dr Benjamin Henley and Assoc … Continue reading









      Talking to computers (part 1): Why is speech recognition so difficult?

      Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.





      What is cognitive load and why does it matter in web and interface design?

Successful design manages cognitive load. Cognitive load is a technical term for “mental effort”; more specifically, it’s the total amount of mental effort required for a given task. Completing any task requires some level of mental effort. This includes learning new information, analyzing stimuli, and working with short- and long-term memory. Mental energy which has […]

      The post What is cognitive load and why does it matter in web and interface design? appeared first on Psychology of Web Design | 3.7 Blog.





      Why Your Website Doesn’t Generate Leads (and how to fix it)

Your homepage is beautifully designed. It clearly shows all the ways you can help. You’ve articulated why someone should hire you. You’ve validated your claims through case studies and testimonials, yet… You’re not getting the volume of leads you need. Sure, they trickle in every month, but it’s not enough to grow your business. What are […]

      The post Why Your Website Doesn’t Generate Leads (and how to fix it) appeared first on Psychology of Web Design | 3.7 Blog.





      Why personas are antiquated (and what you should use instead)

      Personas are antiquated… this coming from someone who has relied on and written about them for years. For years at 3.7 Designs, we’ve created personas during the design discovery phase. I recently realized that the traditional marketing persona is no longer a relevant practice. The keyword here is “traditional.” At 3.7 we’ve adopted a practice […]

      The post Why personas are antiquated (and what you should use instead) appeared first on Psychology of Web Design | 3.7 Blog.





      Cinematic Street Photography by Victor Cambet


AoiroStudio, May 07, 2020

Victor Cambet is a freelance graphic designer and an amazing photographer currently based in Montreal, QC. What initially caught my eye about Victor's work is his perspective, the way he sees things through his camera lens. It's pure, raw, and cinematic street photography. That's one of the reasons why we decided to feature his work on ABDZ. Being a personal fan of Victor's, I have always enjoyed his shots of my hometown of Montreal (and still do). I have lived in this city for more than 30 years, and seeing it through his lens is quite a pleasant feeling. Definitely check out his Instagram; you will get to follow the behind-the-scenes stories and notice how passionate and patient Victor is with his photography. Make sure to give him some love.

La rue est un film où chaque inconnu en devient le personnage principal. (“The street is a film in which every stranger becomes its main character.”)

      About Victor Cambet

Victor is a freelance graphic designer currently based in my hometown of Montreal, QC, Canada. You should definitely follow Victor and check out his store.

[Embedded Instagram posts by @victorcambet; captions: “La rue est un film...” (“The street is a film...”), “L’homme au chapeau.” (“The man in the hat.”), “De l’ombre à la lumière.” (“From shadow to light.”), “Un regard.” (“A gaze.”), “Une silhouette dans la nuit.” (“A silhouette in the night.”), “À découvert.” (“Out in the open.”)]






          hy

          Why Collaborative Coding Is The Ultimate Career Hack

          Taking your first steps in programming is like picking up a foreign language. At first, the syntax makes no sense, the vocabulary is unfamiliar, and everything looks and sounds unintelligible. If you’re anything like me when I started, fluency feels impossible. I promise it isn’t. When I began coding, the learning curve hit me — hard. I spent ten months teaching myself the basics while trying to stave off feelings of self-doubt that I now recognize as imposter syndrome.




          hy

          Photography Life makes all their paid premium courses free

          Photography Life has just contributed to the selection of online courses that you can take for free. While their premium courses normally sell for $150 each, you can now access them free of charge. The founders have released them on YouTube, available for everyone to watch. The Photography Life team came to the decision […]

          The post Photography Life makes all their paid premium courses free appeared first on DIY Photography.




          hy

          Unbounded Kobayashi hyperbolic domains in $\mathbb{C}^n$. (arXiv:1911.05632v2 [math.CV] UPDATED)

          We first give a sufficient condition, drawn from pluripotential theory, for an unbounded domain in the complex Euclidean space $\mathbb{C}^n$ to be Kobayashi hyperbolic. Then, we construct an example of a rigid pseudoconvex domain in $\mathbb{C}^3$ that is Kobayashi hyperbolic and has a nonempty core. In particular, this domain is not biholomorphic to a bounded domain in $\mathbb{C}^3$, and the above-mentioned sufficient condition for Kobayashi hyperbolicity is not necessary.




          hy

          Nonlinear stability of explicit self-similar solutions for the timelike extremal hypersurfaces in R^{1+3}. (arXiv:1907.01126v2 [math.AP] UPDATED)

          This paper is devoted to the study of the singularity phenomenon of timelike extremal hypersurfaces in Minkowski spacetime $\mathbb{R}^{1+3}$. We find that there are two explicit lightlike self-similar solutions to a graph representation of timelike extremal hypersurfaces in Minkowski spacetime $\mathbb{R}^{1+3}$; geometrically, they are two spheres. The linear mode instability of these lightlike self-similar solutions for the radially symmetric membranes equation is given. After that, we show that these self-similar solutions of the radially symmetric membranes equation are nonlinearly stable inside a strictly proper subset of the backward lightcone. This means that the two spheres behave dynamically as attractors. Meanwhile, we overcome the double-root case (where the theorem of Poincaré cannot be used) in solving the difference equation by constructing a Newton polygon when we carry out the spectral analysis of the linear operator.




          hy

          Derivatives of normal Jacobi operator on real hypersurfaces in the complex quadric. (arXiv:2005.03483v1 [math.DG])

          In \cite{S 2017}, Suh gave a non-existence theorem for Hopf real hypersurfaces in the complex quadric with parallel normal Jacobi operator. Motivated by this result, in this paper we introduce some generalized conditions, named $\mathcal{C}$-parallel or Reeb parallel normal Jacobi operators. Using these weaker parallelism conditions on the normal Jacobi operator, we first prove a non-existence theorem for Hopf real hypersurfaces with $\mathcal{C}$-parallel normal Jacobi operator in the complex quadric $Q^{m}$, $m \geq 3$. Next, we prove that a Hopf real hypersurface has Reeb parallel normal Jacobi operator if and only if it has an $\mathfrak{A}$-isotropic singular normal vector field.




          hy

          Sums of powers of integers and hyperharmonic numbers. (arXiv:2005.03407v1 [math.NT])

          In this paper, we derive a formula for the sums of powers of the first $n$ positive integers that involves the hyperharmonic numbers and the Stirling numbers of the second kind. Then, using an explicit representation for the hyperharmonic numbers, we generalize this formula to the sums of powers of an arbitrary arithmetic progression. Moreover, as a by-product, we express the Bernoulli polynomials in terms of the hyperharmonic polynomials and the Stirling numbers of the second kind.
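
          For orientation, one classical way that the Stirling numbers of the second kind enter such sums (a standard identity, quoted here for context rather than taken from the paper) is $\sum_{k=1}^{n} k^p = \sum_{j=1}^{p} j!\, S(p,j) \binom{n+1}{j+1}$; the formula derived in the paper instead combines Stirling numbers with hyperharmonic numbers and is then extended to sums over arbitrary arithmetic progressions.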




          hy

          Minimum pair degree condition for tight Hamiltonian cycles in $4$-uniform hypergraphs. (arXiv:2005.03391v1 [math.CO])

          We show that every 4-uniform hypergraph with $n$ vertices and minimum pair degree at least $(5/9+o(1))n^2/2$ contains a tight Hamiltonian cycle. This degree condition is asymptotically optimal.




          hy

          Evaluating the phase dynamics of coupled oscillators via time-variant topological features. (arXiv:2005.03343v1 [physics.data-an])

          The characterization of phase dynamics in coupled oscillators offers insights into fundamental phenomena in complex systems. To describe the collective dynamics in the oscillatory system, order parameters are often used but are insufficient for identifying more specific behaviors. We therefore propose a topological approach that constructs quantitative features describing the phase evolution of oscillators. Here, the phase data are mapped into a high-dimensional space at each time point, and topological features describing the shape of the data are subsequently extracted from the mapped points. We extend these features to time-variant topological features by considering the evolution time, which serves as an additional dimension in the topological-feature space. The resulting time-variant features provide crucial insights into the time evolution of phase dynamics. We combine these features with the machine learning kernel method to characterize the multicluster synchronized dynamics at a very early stage of the evolution. Furthermore, we demonstrate the usefulness of our method for qualitatively explaining chimera states, which are states of stably coexisting coherent and incoherent groups in systems of identical phase oscillators. The experimental results show that our method is generally better than those using order parameters, especially if only data on the early-stage dynamics are available.
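
          For a point of reference on the order-parameter baseline mentioned above (a generic illustration, not code from the paper), the standard Kuramoto order parameter collapses all the phases into a single number r in [0, 1], which is exactly why it can miss multicluster structure:

            import numpy as np

            def kuramoto_order_parameter(phases):
                # phases: array of shape (n_times, n_oscillators), in radians.
                # r(t) near 1 means the oscillators are tightly synchronized.
                z = np.exp(1j * np.asarray(phases)).mean(axis=-1)
                return np.abs(z)

            # Two perfectly synchronized clusters half a cycle apart give r = 0,
            # even though the dynamics are highly ordered.
            phases = np.array([[0.0, 0.0, np.pi, np.pi]])
            print(kuramoto_order_parameter(phases))  # -> approximately [0.]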




          hy

          Generalized Cauchy-Kovalevskaya extension and plane wave decompositions in superspace. (arXiv:2005.03160v1 [math-ph])

          The aim of this paper is to obtain a generalized CK-extension theorem in superspace for the bi-axial Dirac operator. In the classical commuting case, this result can be written as a power series of Bessel type of certain differential operators acting on a single initial function. In the superspace setting, novel structures appear in the cases of negative even superdimensions. In these cases, the CK-extension depends on two initial functions on which two power series of differential operators act. These series are not only of Bessel type but they give rise to an additional structure in terms of Appell polynomials. This pattern is also present in the structure of the Pizzetti formula, which describes integration over the supersphere in terms of differential operators. We make this relation explicit by studying the decomposition of the generalized CK-extension into plane waves integrated over the supersphere. Moreover, these results are applied to obtain a decomposition of the Cauchy kernel in superspace into monogenic plane waves, which shall be useful for inverting the super Radon transform.




          hy

          Hydrodynamic limit of Robinson-Schensted-Knuth algorithm. (arXiv:2005.03147v1 [math.CO])

          We investigate the evolution in time of the position of a fixed number in the insertion tableau when the Robinson-Schensted-Knuth algorithm is applied to a sequence of random numbers. When the length of the sequence tends to infinity, a typical trajectory, after scaling, converges uniformly in probability to some deterministic curve.
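
          For readers unfamiliar with the algorithm, here is a minimal sketch of Schensted row insertion and of tracking where the first inserted value sits as more random numbers arrive; this only illustrates the object being studied, not the paper's scaling analysis:

            import bisect
            import random

            def rs_insert(tableau, x):
                # Row-insert x into an insertion tableau (rows weakly increasing).
                # Returns the (row, column) where a new box is created.
                row = 0
                while True:
                    if row == len(tableau):
                        tableau.append([x])
                        return row, 0
                    r = tableau[row]
                    j = bisect.bisect_right(r, x)  # first entry strictly greater than x
                    if j == len(r):
                        r.append(x)
                        return row, j
                    x, r[j] = r[j], x  # bump the displaced entry into the next row
                    row += 1

            random.seed(0)
            seq = [random.random() for _ in range(200)]
            tableau = []
            for x in seq:
                rs_insert(tableau, x)
            # Position of the very first number after all insertions (it may have been
            # bumped down several rows by later, smaller entries).
            pos = [(i, row.index(seq[0])) for i, row in enumerate(tableau) if seq[0] in row]
            print(pos)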




          hy

          Modeling nanoconfinement effects using active learning. (arXiv:2005.02587v2 [physics.app-ph] UPDATED)

          Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.
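
          The uncertainty-driven loop described above is easy to sketch in generic form. The snippet below is only an illustration of the idea: the paper uses physics-based deep learning surrogates, whereas here a random forest's tree-to-tree spread stands in as a cheap uncertainty estimate, and run_md_simulation is a hypothetical placeholder for the expensive simulator.

            import numpy as np
            from sklearn.ensemble import RandomForestRegressor

            def run_md_simulation(x):
                # Placeholder for the expensive high-fidelity molecular dynamics run.
                raise NotImplementedError

            def active_learning(candidates, n_init=10, n_rounds=20):
                rng = np.random.default_rng(0)
                idx = rng.choice(len(candidates), size=n_init, replace=False)
                X = candidates[idx]
                y = np.array([run_md_simulation(x) for x in X])
                for _ in range(n_rounds):
                    model = RandomForestRegressor(n_estimators=100).fit(X, y)
                    # Query where the surrogate is least certain (largest spread across trees).
                    preds = np.stack([tree.predict(candidates) for tree in model.estimators_])
                    x_new = candidates[int(np.argmax(preds.std(axis=0)))]
                    X = np.vstack([X, x_new])
                    y = np.append(y, run_md_simulation(x_new))
                return model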




          hy

          Teaching Cameras to Feel: Estimating Tactile Physical Properties of Surfaces From Images. (arXiv:2004.14487v2 [cs.CV] UPDATED)

          The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first-of-its-kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprising an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we develop a neural architecture search framework capable of selecting optimal combinations of viewing angles for estimating a given physical property.




          hy

          Decoding EEG Rhythms During Action Observation, Motor Imagery, and Execution for Standing and Sitting. (arXiv:2004.04107v2 [cs.HC] UPDATED)

          Event-related desynchronization and synchronization (ERD/S) and movement-related cortical potential (MRCP) play an important role in brain-computer interfaces (BCI) for lower limb rehabilitation, particularly in standing and sitting. However, little is known about the differences in cortical activation between standing and sitting, especially how the brain's intention modulates the pre-movement sensorimotor rhythm for switching between these movements. In this study, we aim to investigate the decoding of continuous EEG rhythms during action observation (AO), motor imagery (MI), and motor execution (ME) for standing and sitting. We developed a behavioral task in which participants were instructed to perform both AO and MI/ME for the actions of sit-to-stand and stand-to-sit. Our results demonstrated that ERD was prominent during AO, whereas ERS was typical during MI at the alpha band across the sensorimotor area. A combination of the filter bank common spatial pattern (FBCSP) and support vector machine (SVM) was used for classification in both offline and pseudo-online analyses. The offline analysis indicated that the classification of AO versus MI provided the highest mean accuracy of $82.73 \pm 2.38\%$ in the stand-to-sit transition. By applying the pseudo-online analysis, we demonstrated the higher performance of decoding neural intentions from the MI paradigm in comparison to the ME paradigm. These observations point to the promising prospect of using our developed tasks, based on the integration of both AO and MI, to build future exoskeleton-based rehabilitation systems.




          hy

          Trees and Forests in Nuclear Physics. (arXiv:2002.10290v2 [nucl-th] UPDATED)

          We present a simple introduction to the decision tree algorithm using some examples from nuclear physics. We show how to improve the accuracy of the classical liquid drop nuclear mass model by performing Feature Engineering with a decision tree. Finally, we apply the method to the Duflo-Zuker model showing that, despite their simplicity, decision trees are capable of improving the description of nuclear masses using a limited number of free parameters.




          hy

          Eccentricity terrain of $\delta$-hyperbolic graphs. (arXiv:2002.08495v2 [cs.DM] UPDATED)

          A graph $G=(V,E)$ is $\delta$-hyperbolic if for any four vertices $u,v,w,x$, the two larger of the three distance sums $d(u,v)+d(w,x)$, $d(u,w)+d(v,x)$, and $d(u,x)+d(v,w)$ differ by at most $2\delta \geq 0$. Recent work shows that many real-world graphs have small hyperbolicity $\delta$. This paper describes the eccentricity terrain of a $\delta$-hyperbolic graph. The eccentricity function $e_G(v)=\max\{d(v,u) : u \in V\}$ partitions the vertex set of $G$ into eccentricity layers $C_{k}(G) = \{v \in V : e(v)=rad(G)+k\}$, $k \in \mathbb{N}$, where $rad(G)=\min\{e_G(v): v\in V\}$ is the radius of $G$. The paper studies the eccentricity layers of vertices along shortest paths, identifying such terrain features as hills, plains, valleys, terraces, and plateaus. It introduces the notion of $\eta$-pseudoconvexity, which implies Gromov's $\epsilon$-quasiconvexity, and illustrates the abundance of pseudoconvex sets in $\delta$-hyperbolic graphs. In particular, it shows that all sets $C_{\leq k}(G)=\{v\in V : e_G(v) \leq rad(G) + k\}$, $k\in \mathbb{N}$, are $(2\delta-1)$-pseudoconvex. Additionally, several bounds on the eccentricity of a vertex are obtained which yield a few approaches to efficiently approximating all eccentricities. An $O(\delta |E|)$ time eccentricity approximation $\hat{e}(v)$, for all $v\in V$, is presented that uses distances to two mutually distant vertices and satisfies $e_G(v)-2\delta \leq \hat{e}(v) \leq e_G(v)$. It also shows existence of two eccentricity approximating spanning trees $T$, one constructible in $O(\delta |E|)$ time and the other in $O(|E|)$ time, which satisfy $e_G(v) \leq e_T(v) \leq e_G(v)+4\delta+1$ and $e_G(v) \leq e_T(v) \leq e_G(v)+6\delta$, respectively. Thus, the eccentricity terrain of a tree gives a good approximation (up to an additive error $O(\delta)$) of the eccentricity terrain of a $\delta$-hyperbolic graph.
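
          The four-point condition in the first sentence is straightforward to check directly on a small graph. A brute-force sketch, given a precomputed all-pairs distance matrix (a generic illustration of the definition, not one of the paper's algorithms):

            from itertools import combinations

            def hyperbolicity(dist):
                # Smallest delta such that, for every 4 vertices, the two larger of the
                # three pairwise distance sums differ by at most 2*delta.
                delta2 = 0
                for u, v, w, x in combinations(range(len(dist)), 4):
                    s = sorted([dist[u][v] + dist[w][x],
                                dist[u][w] + dist[v][x],
                                dist[u][x] + dist[v][w]])
                    delta2 = max(delta2, s[2] - s[1])
                return delta2 / 2

            # A 4-cycle is 1-hyperbolic.
            C4 = [[0, 1, 2, 1],
                  [1, 0, 1, 2],
                  [2, 1, 0, 1],
                  [1, 2, 1, 0]]
            print(hyperbolicity(C4))  # -> 1.0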




          hy

          Evolutionary Dynamics of Higher-Order Interactions. (arXiv:2001.10313v2 [physics.soc-ph] UPDATED)

          We live and cooperate in networks. However, links in networks only allow for pairwise interactions, thus making the framework suitable for dyadic games, but not for games that are played in groups of more than two players. To remedy this, we introduce higher-order interactions, where a link can connect more than two individuals, and study their evolutionary dynamics. We first consider a public goods game on a uniform hypergraph, showing that it corresponds to the replicator dynamics in the well-mixed limit, and providing an exact theoretical foundation to study cooperation in networked groups. We also extend the analysis to heterogeneous hypergraphs that describe interactions of groups of different sizes and characterize the evolution of cooperation in such cases. Finally, we apply our new formulation to study the nature of group dynamics in real systems, showing how to extract the actual dependence of the synergy factor on the size of a group from real-world collaboration data in science and technology. Our work is a first step towards the implementation of new actions to boost cooperation in social groups.




          hy

          Constrained Restless Bandits for Dynamic Scheduling in Cyber-Physical Systems. (arXiv:1904.08962v3 [cs.SY] UPDATED)

          Restless multi-armed bandits are a class of discrete-time stochastic control problems which involve sequential decision making with a finite set of actions (set of arms). This paper studies a class of constrained restless multi-armed bandits (CRMAB). The constraints are in the form of time varying set of actions (set of available arms). This variation can be either stochastic or semi-deterministic. Given a set of arms, a fixed number of them can be chosen to be played in each decision interval. The play of each arm yields a state dependent reward. The current states of arms are partially observable through binary feedback signals from arms that are played. The current availability of arms is fully observable. The objective is to maximize long term cumulative reward. The uncertainty about future availability of arms along with partial state information makes this objective challenging. Applications for CRMAB abound in the domain of cyber-physical systems. This optimization problem is analyzed using Whittle's index policy. To this end, a constrained restless single-armed bandit is studied. It is shown to admit a threshold-type optimal policy, and is also indexable. An algorithm to compute Whittle's index is presented. Further, upper bounds on the value function are derived in order to estimate the degree of sub-optimality of various solutions. The simulation study compares the performance of Whittle's index, modified Whittle's index and myopic policies.




          hy

          Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture. (arXiv:2005.03454v1 [cs.LG])

          Sparse models require less memory for storage and enable faster inference by reducing the necessary number of FLOPs. This is relevant both for time-critical and on-device computations using neural networks. The stabilized lottery ticket hypothesis states that networks can be pruned after none or few training iterations, using a mask computed based on the unpruned converged model. On the transformer architecture and the WMT 2014 English-to-German and English-to-French tasks, we show that stabilized lottery ticket pruning performs similarly to magnitude pruning for sparsity levels of up to 85%, and propose a new combination of pruning techniques that outperforms all other techniques for even higher levels of sparsity. Furthermore, we confirm that a parameter's initial sign, not its specific value, is the primary factor for successful training, and show that magnitude pruning cannot be used to find winning lottery tickets.
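
          For context on the magnitude-pruning baseline: it simply keeps the largest-magnitude weights and zeroes the rest, and in (stabilized) lottery ticket pruning a mask of this kind is computed from the converged model and then applied to the early-training weights. A generic sketch of the mask computation, not the paper's implementation:

            import numpy as np

            def magnitude_prune_mask(weights, sparsity):
                # Binary mask keeping the largest-magnitude weights;
                # e.g. sparsity=0.85 zeroes out roughly 85% of the parameters.
                flat = np.abs(weights).ravel()
                k = int(len(flat) * sparsity)
                threshold = np.partition(flat, k)[k] if k < len(flat) else np.inf
                return (np.abs(weights) >= threshold).astype(weights.dtype)

            w = np.random.randn(4, 4)
            mask = magnitude_prune_mask(w, sparsity=0.75)
            print(int(mask.sum()), "of", w.size, "weights kept")  # -> 4 of 16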




          hy

          Probabilistic Hyperproperties of Markov Decision Processes. (arXiv:2005.03362v1 [cs.LO])

          We study the specification and verification of hyperproperties for probabilistic systems represented as Markov decision processes (MDPs). Hyperproperties are system properties that describe the correctness of a system as a relation between multiple executions. Hyperproperties generalize trace properties and include information-flow security requirements, like noninterference, as well as requirements like symmetry, partial observation, robustness, and fault tolerance. We introduce the temporal logic PHL, which extends classic probabilistic logics with quantification over schedulers and traces. PHL can express a wide range of hyperproperties for probabilistic systems, including both classical applications, such as differential privacy, and novel applications in areas such as robotics and planning. While the model checking problem for PHL is in general undecidable, we provide methods both for proving and for refuting a class of probabilistic hyperproperties for MDPs.