
Support Communication During Conversation








Roy Horn of 'Siegfried and Roy' Dies of COVID-19 Complications

Roy Horn, famed tiger handler and co-star of the magic duo known as Siegfried and Roy, died of complications from the coronavirus in a hospital in Las Vegas on Friday. He was 75 years old. "Today, the world has lost one of the greats of magic, but I have lost my best friend," Siegfried Fischbacher said in a statement. "From the moment we met, I knew Roy and I, together, would change the world." "There could be no Siegfried without Roy, and no Roy without Siegfried." This is a developing story. Please check back for updates.





GOP Plans to Spend at least $20 million to Combat Voting Rights Lawsuits

The Republican National Committee and President Donald Trump's reelection campaign have doubled their litigation budget to $20 million, Politico reported Thursday. RNC chief of staff Richard Walters told Politico that the GOP is prepared to sue Democrats "into oblivion" by spending "whatever is necessary" to prevail in legal fights against its rivals leading up to the November election.





The Introvert Advantage with Beth Comstock

Even though I’m an extrovert, I have a feeling the future favors the introvert. Beth Comstock was at the CreativeLive studios in Seattle and I could not help but snag her for a quick moment to pick her brain on one of the most popular topics on my channel — navigating an extroverted world as an introvert. As a self-described introvert, Beth knows what it’s like to elevate your strengths and have the courage to defend your creative ideas. Beth was named one of the most powerful women in business. After leaving a 27-year career at GE as their Chief Marketing Officer and Vice Chair, she decided to go in a completely different direction and focus on new areas such as writing, art, exploration, and discovery. In this episode, Beth shares her advice on how to embrace your nature and bring those strengths to any client, team, or situation. Enjoy! If you dig the show, please give a shout out to Beth on social and let her know. FOLLOW BETH: twitter | instagram | website Listen to the Podcast | Subscribe. This podcast is brought to you by CreativeLive. CreativeLive is the world’s largest hub for online creative education in […]






Getting Comfortable with Being Uncomfortable

Being uncomfortable isn’t usually fun. In fact, we’re probably more likely to try to avoid uncomfortable situations than actually run toward them. Yet it is a valuable skill, not only for dealing with adversity but for giving us the confidence and trust in ourselves to recover quickly from failure, manage our fears, and explore the unknown. In today’s episode, we dive a bit deeper into this topic and I share a few ways we can all practice getting comfortable with being uncomfortable. Enjoy! FOLLOW CHASE: instagram | twitter | website Listen to the Podcast | Subscribe. This podcast is brought to you by CreativeLive. CreativeLive is the world’s largest hub for online creative education in photo/video, art/design, music/audio, craft/maker, money/life and the ability to make a living in any of those disciplines. They are high quality, highly curated classes taught by the world’s top experts — Pulitzer, Oscar, and Grammy Award winners, New York Times best-selling authors, and the best entrepreneurs of our times.






Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.
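If you want to follow along, the setup is roughly the following; the repository URL here is a placeholder for the companion repo linked above, not its real address:

git clone <companion-repo-url> frampton
cd frampton
git checkout starter
mix deps.get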

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's render directly from the router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
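The exact payload shape depends on your LiveView version, but for a phx-keyup binding it is a map that includes the textarea's current value, roughly like this (an illustration, not output copied from the post):

%{"key" => "!", "value" => "# Hello world!"}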

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:
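The inspected post should look much like the struct we built in IEx earlier, for example:

%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}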

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. That’s acceptable here: we don’t have a database, and user processes are isolated from each other by Elixir, so the worst thing a malicious user could do is crash their own session, which doesn’t bother me one bit.
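In show.html.leex, that change amounts to wrapping the body we added above in raw/1:

<div class="rendered-output">
  <%= raw @post.body %>
</div>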

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.





Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch

git checkout feature
# ... bunch of feature commits ...
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# deploy qa-environment to the QA remote environment
# ... more feature commits ...
# oh. I'm not committing in the feature branch like I should be

and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Maybe you prefer

git branch -D qa-environment
git checkout qa-environment

over

git checkout qa-environment
git reset --hard origin/qa-environment

Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. They are what is keeping this from being a context-independent string of commands you run any time this mistake happens. That's what we're going to solve here.

Shorthands for longevity

I like to use all possible natively supported shorthands. There are two broad motivations for that.

  1. Fingers have a limited number of movements in them. Save as many as possible so there are still some left late in life.
  2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

Bash
# USING -

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
git push
# hack hack hack
# whoops
git checkout -        # now on feature 🎉
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout - # now on qa-environment 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
# on feature and ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

We cannot use - when cherry-picking a range

> git cherry-pick origin/-..-
fatal: bad revision 'origin/-..-'

> git cherry-pick origin/qa-environment..-
fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

That shorthand doesn't apply in the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand and once the branch is deleted checkout can't reach it with -:

# assuming that branch-a has an upstream origin/branch-a
> git checkout branch-a
> git checkout branch-b
> git checkout -
> git branch -D -
error: branch '-' not found.
> git branch -D branch-a
> git checkout -
error: pathspec '-' did not match any file(s) known to git

So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

@{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

> git checkout branch-a
> git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
branch-a
> git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previously checked out one
branch-a

Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

git cherry-pick origin/qa-environment..qa-environment

We can do

git cherry-pick origin/qa-environment..@{-1}

Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

Bash
# USING - AND @{-1}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
git push
# hack hack hack
# whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Because @{-1} is the ref itself, not the ref's name, we can't do

> git cherry-pick origin/@{-1}..@{-1}
origin/@{-1}
fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

But good news!

@{u} and @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run. So instead of

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard origin/branch-a
HEAD is now at <the SHA origin/branch-a is at>

we can

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard @{u}                                # <-- So Cool!
HEAD is now at <the SHA origin/branch-a is at>

Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

git checkout branch-a@{u}

checks out the branch that branch-a pulls from.

In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
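To make that concrete, here is a sketch of a triangular setup; the remote and branch names (origin, fork, main, feature) are examples, not something from this post:

Bash
# pull from origin, push to a personal fork
git remote add fork git@github.com:you/project.git
git config remote.pushDefault fork

git checkout -b feature origin/main   # upstream is origin/main
git push                              # with push.default=simple this creates fork/feature
git rev-parse --abbrev-ref @{u}       # origin/main
git rev-parse --abbrev-ref @{push}    # fork/feature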

Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

Bash
# USING - AND @{-1} AND @{u}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
git push
# hack hack hack
# whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Make the things you repeat the easiest to do

Because these commands are generalized, we can run some series of them once, maybe

git checkout - && git reset --hard @{u} && git checkout -

or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
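For example, a git alias version of that last one-liner (the name unfumble is made up here; the command string just mirrors the sequence above):

Bash
git config --global alias.unfumble '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'
# later, right after noticing commits landed on the wrong branch:
git unfumble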

And keep going

The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.





    CLI Equivalents for Common MAMP PRO and Sequel Pro Tasks

    Working on website front ends I sometimes use MAMP PRO to manage local hosts and Sequel Pro to manage databases. Living primarily in my text editor, a terminal, and a browser window, moving to these click-heavy dedicated apps can feel clunky. Happily, the tasks I have most frequently turned to those apps for —starting and stopping servers, creating new hosts, and importing, exporting, deleting, and creating databases— can be done from the command line.

    I still pull up MAMP PRO if I need to change a host's PHP version or work with its other more specialized settings, or Sequel Pro to quickly inspect a database, but for the most part I can stay on the keyboard and in my terminal. Here's how:

    Command Line MAMP PRO

    You can start and stop MAMP PRO's servers from the command line. You can even do this when the MAMP PRO desktop app isn't open.

    Note: MAMP PRO's menu icon will not change color to reflect the running/stopped status when the status is changed via the command line.

    • Start the MAMP PRO servers:
    "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd startServers
    • Stop the MAMP PRO servers:
    "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd stopServers
    • Create a host (replace host_name and root_path):
    "/Applications/MAMP PRO.app/Contents/MacOS/MAMP PRO" cmd createHost host_name root_path

    MAMP PRO-friendly Command Line Sequel Pro

    Note: if you don't use MAMP PRO, just replace the /Applications/MAMP/Library/bin/mysql with mysql.

    In all of the following commands, replace username with your user name (locally this is likely root) and database_name with your database name. The -p (password) flag with no argument will trigger an interactive password prompt. This is more secure than including your password in the command itself (like -pYourPasswordHere). Of course, if you're using the default password root, things aren't particularly secure to begin with, so you might just do -pYourPasswordHere.

    Setting the -h (host) flag to localhost or 127.0.0.1 tells mysql to look at what's on localhost. With the MAMP PRO servers running, that will be the MAMP PRO databases.

    # with the MAMP PRO servers running, these are equivalent:
    # /Applications/MAMP/Library/bin/mysql -h 127.0.0.1 other_options
    # and
    # /Applications/MAMP/Library/bin/mysql -h localhost other_options
    
    /Applications/MAMP/Library/bin/mysql mysql_options # enter. opens an interactive mysql session
    mysql> some command; # don't forget the semicolon
    mysql> exit;
    • Create a local database
    # with the MAMP PRO servers running
    # replace `username` with your username, which is `root` by default
    /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "create database database_name"

    or

    # with the MAMP PRO servers running
    # replace `username` (`root` by default) and `database_name`
    /Applications/MAMP/Library/bin/mysql -h localhost -u username -p # and then enter
    mysql> create database database_name; # don't forget the semicolon
    mysql> exit

        MAMP PRO's databases are stored in /Library/Application Support/appsolute/MAMP PRO/db so to confirm that it worked you can

    ls "/Library/Application Support/appsolute/MAMP PRO/db"
    # will output the available mysql versions. For example I have
    mysql56_2018-11-05_16-25-13     mysql57
    
    # If it isn't clear which one you're after, open the main MAMP PRO app and click
    # on the MySQL "servers and services" item. In my case it shows "Version: 5.7.26"
    
    # Now look in the relevant MySQL directory
    ls "/Library/Application Support/appsolute/MAMP PRO/db/mysql57"
    # the newly created database should be in the list
    • Delete a local database
    # with the MAMP PRO servers running
    # replace `username` (`root` by default) and `database_name`
    /Applications/MAMP/Library/bin/mysql -h localhost -u username -p -e "drop database database_name"
    • Export a dump of a local database. Note that this uses mysqldump not mysql.
    # to export an uncompressed file
    # replace `username` (`root` by default) and `database_name`
    /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name > the/output/path.sql
    
    # to export a compressed file
    # replace `username` (`root` by default) and `database_name`
    /Applications/MAMP/Library/bin/mysqldump -h localhost -u username -p database_name | gzip -c > the/output/path.gz

    • Export a local dump from an external database over SSH. Note that this uses mysqldump not mysql.

    # replace `ssh-user`, `ssh_host`, `mysql_user`, `database_name`, and the output path
    
    # to end up with an uncompressed file
    ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" | gunzip > the/output/path.sql
    
    # to end up with a compressed file
    ssh ssh_user@ssh_host "mysqldump -u mysql_user -p database_name | gzip -c" > the/output/path.gz
    • Import a local database dump into a local database
    # with the MAMP PRO servers running
    # replace `username` (`root` by default) and `database_name`
    /Applications/MAMP/Library/bin/mysql -h localhost -u username -p database_name < the/dump/path.sql
    • Import a local database dump into a remote database over SSH. Use care with this one. But if you are doing it with Sequel Pro —maybe you are copying a Craft site's database from a production server to a QA server— you might as well be able to do it on the command line.
    ssh ssh_user@ssh_host "mysql -u username -p remote_database_name" < the/local/dump/path.sql


    For me, using the command line instead of the MAMP PRO and Sequel Pro GUI means less switching between keyboard and mouse, less opening up GUI features that aren't typically visible on my screen, and generally better DX. Give it a try! And while MAMP Pro's CLI is limited to the essentials, command line mysql of course knows no limits. If there's something else you use Sequel Pro for, you may be able to come up with a mysql CLI equivalent you like even better.
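    If you find yourself typing that long MAMP binary path a lot, a couple of shell aliases can shorten things further. This sketch assumes the default MAMP install location and the root user; adjust to taste:

    # in ~/.zshrc or ~/.bash_profile
    alias mampmysql='/Applications/MAMP/Library/bin/mysql -h localhost -u root -p'
    alias mampmysqldump='/Applications/MAMP/Library/bin/mysqldump -h localhost -u root -p'

    # then, for example:
    mampmysql -e "create database database_name"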





    Implementing Dark Mode In React Apps Using styled-components

    One of the most commonly requested software features is dark mode (or night mode, as others call it). We see dark mode in the apps that we use every day. From mobile to web apps, dark mode has become vital for companies that want to take care of their users’ eyes. Dark mode is a supplemental feature that displays mostly dark surfaces in the UI. Most major companies (such as YouTube, Twitter, and Netflix) have adopted dark mode in their mobile and web apps.





    A Complete Guide To Mechanical Keyboards

    About six years ago, a colleague I’ll call Tom, because that’s his name, forwarded me a link to the ‘WASD CODE’, a keyboard focused on the needs of programmers, designed with the help of Stack Overflow’s Jeff Atwood. I had no idea at the time that there were people actually dedicating themselves to creating keyboards beyond the stock fare shipping with computers. As I read and re-read the blurb, I was smitten.





    Google Lens now copies handwritten text and pastes it straight to your computer

    Are there still folks among you who, like me, prefer handwriting to typing? If you’re in this group, you’ll love this new feature on Google Lens. The app now lets you scan your handwritten notes, copy them, and paste them straight to your computer. I gave it a spin, and I bring you my impressions […]







    The entropy of holomorphic correspondences: exact computations and rational semigroups. (arXiv:2004.13691v1 [math.DS] CROSS LISTED)

    We study two notions of topological entropy of correspondences introduced by Friedland and Dinh-Sibony. Upper bounds are known for both. We identify a class of holomorphic correspondences whose entropy in the sense of Dinh-Sibony equals the known upper bound. This provides an exact computation of the entropy for rational semigroups. We also explore a connection between these two notions of entropy.





    Regular Turán numbers of complete bipartite graphs. (arXiv:2005.02907v2 [math.CO] UPDATED)

    Let $\mathrm{rex}(n, F)$ denote the maximum number of edges in an $n$-vertex graph that is regular and does not contain $F$ as a subgraph. We give lower bounds on $\mathrm{rex}(n, F)$, that are best possible up to a constant factor, when $F$ is one of $C_4$, $K_{2,t}$, $K_{3,3}$ or $K_{s,t}$ when $t>s!$.





    Complete reducibility: Variations on a theme of Serre. (arXiv:2004.14604v2 [math.GR] UPDATED)

    In this note, we unify and extend various concepts in the area of $G$-complete reducibility, where $G$ is a reductive algebraic group. By results of Serre and Bate--Martin--Röhrle, the usual notion of $G$-complete reducibility can be re-framed as a property of an action of a group on the spherical building of the identity component of $G$. We show that other variations of this notion, such as relative complete reducibility and $\sigma$-complete reducibility, can also be viewed as special cases of this building-theoretic definition, and hence a number of results from these areas are special cases of more general properties.





    Equivalence of classical and quantum completeness for real principal type operators on the circle. (arXiv:2004.07547v3 [math.AP] UPDATED)

    In this article, we prove that the completeness of the Hamilton flow and essential self-adjointness are equivalent for real principal type operators on the circle. Moreover, we study spectral properties of these operators.





    The $\kappa$-Newtonian and $\kappa$-Carrollian algebras and their noncommutative spacetimes. (arXiv:2003.03921v2 [hep-th] UPDATED)

    We derive the non-relativistic $c \to \infty$ and ultra-relativistic $c \to 0$ limits of the $\kappa$-deformed symmetries and corresponding spacetime in (3+1) dimensions, with and without a cosmological constant. We apply the theory of Lie bialgebra contractions to the Poisson version of the $\kappa$-(A)dS quantum algebra, and quantize the resulting contracted Poisson-Hopf algebras, thus giving rise to the $\kappa$-deformation of the Newtonian (Newton-Hooke and Galilei) and Carrollian (Para-Poincaré, Para-Euclidean and Carroll) quantum symmetries, including their deformed quadratic Casimir operators. The corresponding $\kappa$-Newtonian and $\kappa$-Carrollian noncommutative spacetimes are also obtained as the non-relativistic and ultra-relativistic limits of the $\kappa$-(A)dS noncommutative spacetime. These constructions allow us to analyze the non-trivial interplay between the quantum deformation parameter $\kappa$, the curvature parameter $\eta$ and the speed of light parameter $c$.





    Co-Seifert Fibrations of Compact Flat Orbifolds. (arXiv:2002.12799v2 [math.GT] UPDATED)

    In this paper, we develop the theory for classifying all the geometric fibrations of compact, connected, flat $n$-orbifolds, over a 1-orbifold, up to affine equivalence. We apply our classification theory to classify all the geometric fibrations of compact, connected, flat $2$-orbifolds, over a 1-orbifold, up to affine equivalence. This paper is an essential part of our project to give a geometric proof of the classification of all closed flat 4-manifolds.





    Locally equivalent Floer complexes and unoriented link cobordisms. (arXiv:1911.03659v4 [math.GT] UPDATED)

    We show that the local equivalence class of the collapsed link Floer complex $cCFL^\infty(L)$, together with many $\Upsilon$-type invariants extracted from this group, is a concordance invariant of links. In particular, we define a version of the invariants $\Upsilon_L(t)$ and $\nu^+(L)$ when $L$ is a link and we prove that they give a lower bound for the slice genus $g_4(L)$. Furthermore, in the last section of the paper we study the homology group $HFL'(L)$ and its behaviour under unoriented cobordisms. We obtain that a normalized version of the $\upsilon$-set, introduced by Ozsváth, Stipsicz and Szabó, produces a lower bound for the 4-dimensional smooth crosscap number $\gamma_4(L)$.





    Compact manifolds of dimension $n\geq 12$ with positive isotropic curvature. (arXiv:1909.12265v4 [math.DG] UPDATED)

    We prove the following result: Let $(M,g_0)$ be a compact manifold of dimension $n\geq 12$ with positive isotropic curvature. Then $M$ is diffeomorphic to a spherical space form, or a compact quotient manifold of $\mathbb{S}^{n-1} \times \mathbb{R}$ by diffeomorphisms, or a connected sum of a finite number of such manifolds. This extends a recent work of Brendle, and implies a conjecture of Schoen in dimensions $n\geq 12$. The proof uses Ricci flow with surgery on compact orbifolds with isolated singularities.





    Convolutions on the complex torus. (arXiv:1908.11815v3 [math.RA] UPDATED)

    "Quasi-elliptic" functions can be given a ring structure in two different ways, using either ordinary multiplication, or convolution. The map between the corresponding standard bases is calculated and given by Eisenstein series. A related structure has appeared recently in the computation of Feynman integrals. The two approaches are related by a sequence of polynomials with interlacing zeroes.





    Bernoulli decomposition and arithmetical independence between sequences. (arXiv:1811.11545v2 [math.NT] UPDATED)

    In this paper we study the following set \[A=\{p(n)+2^n d \bmod 1: n\geq 1\}\subset [0,1],\] where $p$ is a polynomial with at least one irrational coefficient on non-constant terms, $d$ is any real number and, for $a\in [0,\infty)$, $a \bmod 1$ is the fractional part of $a$. By a Bernoulli decomposition method, we show that the closure of $A$ must have full Hausdorff dimension.





    On the Total Curvature and Betti Numbers of Complex Projective Manifolds. (arXiv:1807.11625v2 [math.DG] UPDATED)

    We prove an inequality between the sum of the Betti numbers of a complex projective manifold and its total curvature, and we characterize the complex projective manifolds whose total curvature is minimal. These results extend the classical theorems of Chern and Lashof to complex projective space.





    Off-diagonal estimates for bi-commutators. (arXiv:2005.03548v1 [math.CA])

    We study the bi-commutators $[T_1, [b, T_2]]$ of pointwise multiplication and Calderón-Zygmund operators, and characterize their $L^{p_1}L^{p_2} \to L^{q_1}L^{q_2}$ boundedness for several off-diagonal regimes of the mixed-norm integrability exponents $(p_1,p_2)\neq(q_1,q_2)$. The strategy is based on a bi-parameter version of the recent approximate weak factorization method.





    A reaction-diffusion system to better comprehend the unlockdown: Application of SEIR-type model with diffusion to the spatial spread of COVID-19 in France. (arXiv:2005.03499v1 [q-bio.PE])

    A reaction-diffusion model was developed describing the spread of the COVID-19 virus, considering the mean daily movement of susceptible, exposed and asymptomatic individuals. The model was calibrated using data on confirmed infections and deaths from France, as well as their initial spatial distribution. First, the system of partial differential equations is studied, then the basic reproduction number R0 is derived. Second, numerical simulations, based on a combination of level-set and finite differences, show the spatial spread of COVID-19 from March 16 to June 16. Finally, scenarios of unlockdown are compared according to variations in distancing or partial spatial lockdown.





    On completion of unimodular rows over polynomial extension of finitely generated rings over $\mathbb{Z}$. (arXiv:2005.03485v1 [math.AC])

    In this article, we prove that if $R$ is a finitely generated ring over $\mathbb{Z}$ of dimension $d$, $d\geq 2$, $\frac{1}{d!}\in R$, then any unimodular row over $R[X]$ of length $d+1$ can be mapped to a factorial row by elementary transformations.





    Derivatives of normal Jacobi operator on real hypersurfaces in the complex quadric. (arXiv:2005.03483v1 [math.DG])

    In \cite{S 2017}, Suh gave a non-existence theorem for Hopf real hypersurfaces in the complex quadric with parallel normal Jacobi operator. Motivated by this result, in this paper, we introduce some generalized conditions named $\mathcal C$-parallel or Reeb parallel normal Jacobi operators. By using such weaker parallelisms of normal Jacobi operator, first we can assert a non-existence theorem of Hopf real hypersurfaces with $\mathcal C$-parallel normal Jacobi operator in the complex quadric $Q^{m}$, $m \geq 3$. Next, we prove that a Hopf real hypersurface has Reeb parallel normal Jacobi operator if and only if it has an $\mathfrak A$-isotropic singular normal vector field.





    A note on Penner's cocycle on the fatgraph complex. (arXiv:2005.03414v1 [math.GT])

    We study a 1-cocycle on the fatgraph complex of a punctured surface introduced by Penner. We present an explicit cobounding cochain for this cocycle, whose formula involves a summation over trivalent vertices of a trivalent fatgraph spine. In a similar fashion, we express the symplectic form of the underlying surface of a given fatgraph spine.





    A regularity criterion of the 3D MHD equations involving one velocity and one current density component in Lorentz spaces. (arXiv:2005.03377v1 [math.AP])

    In this paper, we study the regularity criterion of weak solutions to the three-dimensional (3D) MHD equations. It is proved that the solution $(u,b)$ becomes regular provided that one velocity and one current density component of the solution satisfy \begin{equation} u_{3}\in L^{\frac{30\alpha}{7\alpha-45}}\left( 0,T;L^{\alpha,\infty}\left( \mathbb{R}^{3}\right) \right) \text{ with } \frac{45}{7}\leq \alpha \leq \infty, \label{eq01} \end{equation} and \begin{equation} j_{3}\in L^{\frac{2\beta}{2\beta-3}}\left( 0,T;L^{\beta,\infty}\left( \mathbb{R}^{3}\right) \right) \text{ with } \frac{3}{2}\leq \beta \leq \infty, \label{eq02} \end{equation} which generalize some known results.





    Riemann-Hilbert approach and N-soliton formula for the N-component Fokas-Lenells equations. (arXiv:2005.03319v1 [nlin.SI])

    In this work, the generalized $N$-component Fokas-Lenells (FL) equations, which have been studied by Guo and Ling (2012 J. Math. Phys. 53 (7) 073506) for $N=2$, are first investigated via the Riemann-Hilbert (RH) approach. The main purpose is to study the soliton solutions of the coupled Fokas-Lenells (FL) equations for any positive integer $N$, which have a more complex linear relationship than the analogues reported before. We first carry out the spectral analysis of the Lax pair associated with a $(N+1)\times (N+1)$ matrix spectral problem for the $N$-component FL equations. Then, a kind of RH problem is successfully formulated. By introducing the special conditions of irregularity and the reflectionless case, the $N$-soliton solution formula of the equations is derived by solving the corresponding RH problem. Furthermore, taking $N=2,3$ and $4$ as examples, the localized structures and dynamic propagation behavior of their soliton solutions and their interactions are discussed through graphical analysis.





    On the Incomparability of Systems of Sets of Lengths. (arXiv:2005.03316v1 [math.AC])

    Let $H$ be a Krull monoid with finite class group $G$ such that every class contains a prime divisor. We consider the system $\mathcal L (H)$ of all sets of lengths of $H$ and study when $\mathcal L (H)$ contains or is contained in a system $\mathcal L (H')$ of a Krull monoid $H'$ with finite class group $G'$, prime divisors in all classes and Davenport constant $\mathsf D (G')=\mathsf D (G)$. Among others, we show that if $G$ is either cyclic of order $m \ge 7$ or an elementary $2$-group of rank $m-1 \ge 6$, and $G'$ is any group which is non-isomorphic to $G$ but with Davenport constant $\mathsf D (G')=\mathsf D (G)$, then the systems $\mathcal L (H)$ and $\mathcal L (H')$ are incomparable.





    On a kind of self-similar sets with complete overlaps. (arXiv:2005.03280v1 [math.DS])

    Let $E$ be the self-similar set generated by the iterated function system \[ f_0(x)=\frac{x}{\beta},\quad f_1(x)=\frac{x+1}{\beta}, \quad f_{\beta+1}(x)=\frac{x+\beta+1}{\beta} \] with $\beta\ge 3$. Then $E$ is a self-similar set with complete overlaps, i.e., $f_{0}\circ f_{\beta+1}=f_{1}\circ f_1$, but $E$ is not totally self-similar.

    We investigate all its generating iterated function systems, give the spectrum of $E$, and determine the Hausdorff dimension and Hausdorff measure of $E$ and of the sets which contain all points in $E$ having finite or infinite different triadic codings.





    Smooth non-projective equivariant completions of affine spaces. (arXiv:2005.03277v1 [math.AG])

    In this paper we construct an equivariant embedding of the affine space $\mathbb{A}^n$ with the translation group action into a complete non-projective algebraic variety $X$ for all $n \geq 3$. The theory of toric varieties is used as the main tool for this construction. In the case of $n = 3$ we describe the orbit structure on the variety $X$.





    Non-relativity of Kähler manifold and complex space forms. (arXiv:2005.03208v1 [math.CV])

    We study the non-relativity for two real analytic Kähler manifolds and complex space forms of three types. The first one is a Kähler manifold whose polarization of local Kähler potential is a Nash function in a local coordinate. The second one is the Hartogs domain equipped with two canonical metrics whose polarizations of the Kähler potentials are the diastatic functions.





    Generalized Cauchy-Kovalevskaya extension and plane wave decompositions in superspace. (arXiv:2005.03160v1 [math-ph])

    The aim of this paper is to obtain a generalized CK-extension theorem in superspace for the bi-axial Dirac operator. In the classical commuting case, this result can be written as a power series of Bessel type of certain differential operators acting on a single initial function. In the superspace setting, novel structures appear in the cases of negative even superdimensions. In these cases, the CK-extension depends on two initial functions on which two power series of differential operators act. These series are not only of Bessel type but they give rise to an additional structure in terms of Appell polynomials. This pattern also is present in the structure of the Pizzetti formula, which describes integration over the supersphere in terms of differential operators. We make this relation explicit by studying the decomposition of the generalized CK-extension into plane waves integrated over the supersphere. Moreover, these results are applied to obtain a decomposition of the Cauchy kernel in superspace into monogenic plane waves, which shall be useful for inverting the super Radon transform.





    Jealousy-freeness and other common properties in Fair Division of Mixed Manna. (arXiv:2004.11469v2 [cs.GT] UPDATED)

    We consider a fair division setting where indivisible items are allocated to agents. Each agent in the setting has strictly negative, zero or strictly positive utility for each item. We, thus, make a distinction between items that are good for some agents and bad for other agents (i.e. mixed), good for everyone (i.e. goods) or bad for everyone (i.e. bads). For this model, we study axiomatic concepts of allocations such as jealousy-freeness up to one item, envy-freeness up to one item and Pareto-optimality. We obtain many new possibility and impossibility results in regard to combinations of these properties. We also investigate new computational tasks related to such combinations. Thus, we advance the state-of-the-art in fair division of mixed manna.





    The growth rate over trees of any family of sets defined by a monadic second order formula is semi-computable. (arXiv:2004.06508v3 [cs.DM] UPDATED)

    Monadic second order logic can be used to express many classical notions of sets of vertices of a graph as for instance: dominating sets, induced matchings, perfect codes, independent sets or irredundant sets. Bounds on the number of sets of any such family of sets are interesting from a combinatorial point of view and have algorithmic applications. Many such bounds on different families of sets over different classes of graphs are already provided in the literature. In particular, Rote recently showed that the number of minimal dominating sets in trees of order $n$ is at most $95^{\frac{n}{13}}$ and that this bound is asymptotically sharp up to a multiplicative constant. We build on his work to show that what he did for minimal dominating sets can be done for any family of sets definable by a monadic second order formula.

    We first show that, for any monadic second order formula over graphs that characterizes a given kind of subset of its vertices, the maximal number of such sets in a tree can be expressed as the growth rate of a bilinear system. This mostly relies on well known links between monadic second order logic over trees and tree automata and basic tree automata manipulations. Then we show that this "growth rate" of a bilinear system can be approximated from above. We then use our implementation of this result to provide bounds on the number of independent dominating sets, total perfect dominating sets, induced matchings, maximal induced matchings, minimal perfect dominating sets, perfect codes and maximal irredundant sets on trees. We also solve a question from D. Y. Kang et al. regarding $r$-matchings and improve a bound from Górska and Skupień on the number of maximal matchings on trees. Remark that this approach is easily generalizable to graphs of bounded tree width or clique width (or any similar class of graphs where tree automata are meaningful).





    Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016. (arXiv:2004.06286v3 [cs.HC] UPDATED)

    A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. Electroencephalograms (EEGs) used in BCIs are weak, easily contaminated by interference and noise, non-stationary for the same subject, and varying across different subjects and sessions. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications -- motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks -- are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.





    Unsupervised Domain Adaptation on Reading Comprehension. (arXiv:1911.06137v4 [cs.CL] UPDATED)

    Reading comprehension (RC) has been studied in a variety of datasets with the boosted performance brought by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate this issue, we are going to investigate unsupervised domain adaptation on RC, wherein a model is trained on labeled source domain and to be applied to the target domain with only unlabeled samples. We first show that even with the powerful BERT contextual representation, the performance is still unsatisfactory when the model trained on one dataset is directly applied to another target dataset. To solve this, we provide a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset along with the confidence filtering to generate reliable pseudo-labeled samples in the target domain for self-training. On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show our approach achieves comparable accuracy to supervised models on multiple large-scale benchmark datasets.





    Over-the-Air Computation Systems: Optimization, Analysis and Scaling Laws. (arXiv:1909.00329v2 [cs.IT] UPDATED)

    For future Internet of Things (IoT)-based Big Data applications (e.g., smart cities/transportation), wireless data collection from ubiquitous massive smart sensors with limited spectrum bandwidth is very challenging. On the other hand, to interpret the meaning behind the collected data, it is also challenging for edge fusion centers running computing tasks over large data sets with limited computation capacity. To tackle these challenges, by exploiting the superposition property of a multiple-access channel and the functional decomposition properties, the recently proposed technique, over-the-air computation (AirComp), enables an effective joint data collection and computation from concurrent sensor transmissions. In this paper, we focus on a single-antenna AirComp system consisting of $K$ sensors and one receiver (i.e., the fusion center). We consider an optimization problem to minimize the computation mean-squared error (MSE) of the $K$ sensors' signals at the receiver by optimizing the transmitting-receiving (Tx-Rx) policy, under the peak power constraint of each sensor. Although the problem is not convex, we derive the computation-optimal policy in closed form. Also, we comprehensively investigate the ergodic performance of AirComp systems in terms of the average computation MSE and the average power consumption under Rayleigh fading channels with different Tx-Rx policies. For the computation-optimal policy, we prove that its average computation MSE has a decay rate of $O(1/\sqrt{K})$, and our numerical results illustrate that the policy also has a vanishing average power consumption with the increasing $K$, which jointly show the computation effectiveness and the energy efficiency of the policy with a large number of sensors.





    A Shift Selection Strategy for Parallel Shift-Invert Spectrum Slicing in Symmetric Self-Consistent Eigenvalue Computation. (arXiv:1908.06043v2 [math.NA] UPDATED)

    The central importance of large scale eigenvalue problems in scientific computation necessitates the development of massively parallel algorithms for their solution. Recent advances in dense numerical linear algebra have enabled the routine treatment of eigenvalue problems with dimensions on the order of hundreds of thousands on the world's largest supercomputers. In cases where dense treatments are not feasible, Krylov subspace methods offer an attractive alternative due to the fact that they do not require storage of the problem matrices. However, demonstration of scalability of either of these classes of eigenvalue algorithms on computing architectures capable of expressing massive parallelism is non-trivial due to communication requirements and serial bottlenecks, respectively. In this work, we introduce the SISLICE method: a parallel shift-invert algorithm for the solution of the symmetric self-consistent field (SCF) eigenvalue problem. The SISLICE method drastically reduces the communication requirement of current parallel shift-invert eigenvalue algorithms through various shift selection and migration techniques based on density of states estimation and k-means clustering, respectively. This work demonstrates the robustness and parallel performance of the SISLICE method on a representative set of SCF eigenvalue problems and outlines research directions which will be explored in future work.





    An improved exact algorithm and an NP-completeness proof for sparse matrix bipartitioning. (arXiv:1811.02043v2 [cs.DS] UPDATED)

    We investigate sparse matrix bipartitioning -- a problem where we minimize the communication volume in parallel sparse matrix-vector multiplication. We prove, by reduction from graph bisection, that this problem is $\mathcal{NP}$-complete in the case where each side of the bipartitioning must contain a linear fraction of the nonzeros.

    We present an improved exact branch-and-bound algorithm which finds the minimum communication volume for a given matrix and maximum allowed imbalance. The algorithm is based on a maximum-flow bound and a packing bound, which extend previous matching and packing bounds.

    We implemented the algorithm in a new program called MP (Matrix Partitioner), which solved 839 matrices from the SuiteSparse collection to optimality, each within 24 hours of CPU-time. Furthermore, MP solved the difficult problem of the matrix cage6 in about 3 days. The new program is on average more than ten times faster than the previous program MondriaanOpt.

    Benchmark results using the set of 839 optimally solved matrices show that combining the medium-grain/iterative refinement methods of the Mondriaan package with the hypergraph bipartitioner of the PaToH package produces sparse matrix bipartitionings on average within 10% of the optimal solution.





    Identifying Compromised Accounts on Social Media Using Statistical Text Analysis. (arXiv:1804.07247v3 [cs.SI] UPDATED)

    Compromised accounts on social networks are regular user accounts that have been taken over by an entity with malicious intent. Since the adversary exploits the already established trust of a compromised account, it is crucial to detect these accounts to limit the damage they can cause. We propose a novel general framework for discovering compromised accounts by semantic analysis of text messages coming out from an account. Our framework is built on the observation that normal users will use language that is measurably different from the language that an adversary would use when the account is compromised. We use our framework to develop specific algorithms that use the difference of language models of users and adversaries as features in a supervised learning setup. Evaluation results show that the proposed framework is effective for discovering compromised accounts on social networks and a KL-divergence-based language model feature works best.





    Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. (arXiv:1706.02205v4 [math.NA] UPDATED)

    Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_{i} \}_{1 \leq i \leq N} \subset \mathbb{R}^{d}$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously-distributed sampling points, we show how to identify a subset $S \subset \{ 1, \dots, N \}^2$, with $\# S = O ( N \log (N) \log^{d} ( N /\epsilon ) )$, such that the zero fill-in incomplete Cholesky factorisation of the sparse matrix $\Theta_{ij} 1_{( i, j ) \in S}$ is an $\epsilon$-approximation of $\Theta$. This factorisation can provably be obtained in complexity $O ( N \log( N ) \log^{d}( N /\epsilon) )$ in space and $O ( N \log^{2}( N ) \log^{2d}( N /\epsilon) )$ in time, improving upon the state of the art for general elliptic operators; we further present numerical evidence that $d$ can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the $x_{i}$ and does not require an analytic representation of $G$. Furthermore, this factorization straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorization, we obtain, at nearly linear complexity, the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorization we also obtain a solver for elliptic PDE with complexity $O ( N \log^{d}( N /\epsilon) )$ in space and $O ( N \log^{2d}( N /\epsilon) )$ in time, improving upon the state of the art for general elliptic operators.





    Learning Robust Models for e-Commerce Product Search. (arXiv:2005.03624v1 [cs.CL])

    Showing items that do not match search query intent degrades customer experience in e-commerce. These mismatches result from counterfactual biases of the ranking algorithms toward noisy behavioral signals such as clicks and purchases in the search logs. Mitigating the problem requires a large labeled dataset, which is expensive and time-consuming to obtain. In this paper, we develop a deep, end-to-end model that learns to effectively classify mismatches and to generate hard mismatched examples to improve the classifier. We train the model end-to-end by introducing a latent variable into the cross-entropy loss that alternates between using the real and generated samples. This not only makes the classifier more robust but also boosts the overall ranking performance. Our model achieves a relative gain compared to baselines by over 26% in F-score, and over 17% in Area Under PR curve. On live search traffic, our model gains significant improvement in multiple countries.





    Computing with bricks and mortar: Classification of waveforms with doped concrete blocks. (arXiv:2005.03498v1 [cs.ET])

    We present results showing the capability of a concrete-based information processing substrate in the signal classification task, in accordance with the in materio computing paradigm. As Reservoir Computing is a suitable model for describing embedded in materio computation, we propose that this type of basic construction unit can be used as a source for the "reservoir of states" necessary for simple tuning of the readout layer. In that perspective, buildings constructed from computing concrete could function as a highly parallel information processor for smart architecture. We present an electrical characterization of the set of samples with different additive concentrations, followed by a dynamical analysis of selected specimens showing fingerprints of memfractive properties. Moreover, on the basis of the obtained parameters, classification of the signal waveform shapes can be performed in scenarios explicitly tuned for a given device terminal.





    Brain-like approaches to unsupervised learning of hidden representations -- a comparative study. (arXiv:2005.03476v1 [cs.NE])

    Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on MNIST dataset is studied using an external classifier, and compared with other unsupervised learning methods that include restricted Boltzmann machines and autoencoders.