
Low latency variable transfer network communicating variable written to source processing core variable register allocated to destination thread to destination processing core variable register allocated to source thread

A method and circuit arrangement utilize a low latency variable transfer network between the register files of multiple processing cores in a multi-core processor chip to support fine-grained parallelism of virtual threads across multiple hardware threads. The communication of a variable over the variable transfer network may be initiated by a move from a local register in a register file of a source processing core to a variable register that is allocated to a destination hardware thread in a destination processing core, so that the destination hardware thread can then move the variable from the variable register to a local register in the destination processing core.
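
As a rough illustration of the two moves described above, here is a minimal C sketch; the core/register layout, the `var_send`/`var_recv` helpers, and all sizes are hypothetical stand-ins for the network hardware, not the patented circuit.

```c
#include <stdio.h>

#define CORES 2
#define LOCAL_REGS 8
#define VAR_REGS 4   /* one variable register per hardware thread, per core */

/* Hypothetical model: each core has local registers plus variable
 * registers that other cores can write over the transfer network. */
typedef struct {
    unsigned local[LOCAL_REGS];
    unsigned var[VAR_REGS];   /* var[t] is allocated to hardware thread t */
} Core;

static Core chip[CORES];

/* Source side: move a local register onto the network, targeting the
 * variable register allocated to a destination hardware thread. */
static void var_send(int src_core, int src_reg, int dst_core, int dst_thread) {
    chip[dst_core].var[dst_thread] = chip[src_core].local[src_reg];
}

/* Destination side: the destination thread pulls the variable into a
 * local register of its own core. */
static void var_recv(int dst_core, int dst_thread, int dst_reg) {
    chip[dst_core].local[dst_reg] = chip[dst_core].var[dst_thread];
}

int main(void) {
    chip[0].local[3] = 42;   /* value produced by the source thread */
    var_send(0, 3, 1, 2);    /* core 0 -> variable register of thread 2 on core 1 */
    var_recv(1, 2, 5);       /* thread 2 on core 1 moves it into local r5 */
    printf("core 1, r5 = %u\n", chip[1].local[5]);
    return 0;
}
```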





System for accessing a register file using an address retrieved from the register file

A data processing system and method are disclosed. The system comprises an instruction-fetch stage, where an instruction is fetched and a specific instruction is input into the decode stage; a decode stage, where said specific instruction indicates that the contents of a register in a register file are used as an index, and the register file entry pointed to by said index is then accessed; and an execution stage, where the access result of the decode stage is received and computations are performed according to that result.
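
The double register file access is easy to picture in a few lines of C; this sketch invents a small register file and a `decode_indirect_read` helper purely for illustration.

```c
#include <stdio.h>

#define NREGS 16

/* Hypothetical three-stage flow: fetch supplies the instruction, decode
 * reads the index register and uses its contents to address the register
 * file again, execute consumes the value read in decode. */
static unsigned regfile[NREGS];

/* Decode stage: contents of 'index_reg' select which register to read. */
static unsigned decode_indirect_read(int index_reg) {
    unsigned idx = regfile[index_reg] % NREGS; /* index from the register file */
    return regfile[idx];                       /* second access, via the index */
}

int main(void) {
    regfile[7] = 123;   /* data */
    regfile[2] = 7;     /* r2 holds the index of the register to read */
    unsigned operand = decode_indirect_read(2);
    printf("execute stage receives %u\n", operand);  /* 123 */
    return 0;
}
```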





Implementation of multi-tasking on a digital signal processor with a hardware stack

The disclosure relates to the implementation of multi-tasking on a digital signal processor. Blocking functions are arranged such that they do not make use of the processor's hardware stack. Respective function calls are replaced with a piece of inline assembly code, which instead performs a branch to the correct routine for carrying out said function. If a blocking condition of the blocking function is encountered, a task switch can be performed to resume another task. Because the hardware stack is not used when a task switch might have to occur, mixing of hardware-stack contents among function calls performed by different tasks is avoided.
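
A minimal C sketch of the idea follows (the patent itself operates at the assembly level); the task structure, the polling condition, and the scheduler loop are all assumptions made for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* A blocking function is reached by a branch through the scheduler rather
 * than a hardware-stack call, so a task switch on a blocking condition
 * leaves nothing of one task on the hardware stack. */
typedef struct { int id; bool done; } Task;

static bool io_ready(Task *t) {         /* pretend condition: ready on the
                                           second poll of each task */
    static int polls[2];
    return polls[t->id]++ > 0;
}

static void blocking_read(Task *t) {    /* branched to, not called */
    if (!io_ready(t)) return;           /* blocking condition: switch task */
    printf("task %d: read completed\n", t->id);
    t->done = true;
}

int main(void) {
    Task tasks[2] = { {0, false}, {1, false} };
    while (!tasks[0].done || !tasks[1].done)   /* tiny round-robin scheduler */
        for (int i = 0; i < 2; i++)
            if (!tasks[i].done) blocking_read(&tasks[i]);
    return 0;
}
```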





System and method for controlling restarting of instruction fetching using speculative address computations

A system and method for controlling restarting of instruction fetching using speculative address computations in a processor are provided. The system includes a predicted target queue to hold branch prediction logic (BPL) generated target address values. The system also includes target selection logic including a recycle queue. The target selection logic selects a saved branch target value between a previously speculatively calculated branch target value from the recycle queue and an address value from the predicted target queue. The system further includes a compare block to identify a wrong target in response to a mismatch between the saved branch target value and a current calculated branch target, where instruction fetching is restarted in response to the wrong target.
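
The selection-and-compare flow might look like the following C sketch; `select_saved_target` and `check_target` are invented names, and the queue machinery is reduced to a validity flag.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

/* Hypothetical selection: prefer a previously calculated target saved in
 * the recycle queue; otherwise fall back to the BPL-predicted target. */
static uint64_t select_saved_target(bool recycle_valid, uint64_t recycled,
                                    uint64_t predicted) {
    return recycle_valid ? recycled : predicted;
}

/* Compare block: a mismatch against the freshly calculated target marks
 * the fetch as wrong and forces a refetch from the correct address. */
static bool check_target(uint64_t saved, uint64_t calculated, uint64_t *refetch) {
    if (saved != calculated) {
        *refetch = calculated;   /* restart instruction fetching here */
        return true;             /* wrong target detected */
    }
    return false;
}

int main(void) {
    uint64_t saved = select_saved_target(true, 0x4000, 0x4800);
    uint64_t refetch = 0;
    if (check_target(saved, 0x4400, &refetch))
        printf("wrong target, refetch at 0x%llx\n", (unsigned long long)refetch);
    return 0;
}
```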





Combined branch target and predicate prediction for instruction blocks

Embodiments provide methods, apparatus, systems, and computer readable media associated with predicting predicates and branch targets during execution of programs using combined branch target and predicate predictions. The predictions may be made using one or more prediction control flow graphs which represent predicates in instruction blocks and branches between blocks in a program. The prediction control flow graphs may be structured as trees such that each node in the graphs is associated with a predicate instruction, and each leaf associated with a branch target which jumps to another block. During execution of a block, a prediction generator may take a control point history and generate a prediction. Following the path suggested by the prediction through the tree, both predicate values and branch targets may be predicted. Other embodiments may be described and claimed.
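
A tree walk of this kind can be sketched in C as follows; the node layout, the one-bit-per-predicate encoding of the prediction, and the function names are assumptions for illustration only.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical prediction control flow graph node: internal nodes carry a
 * predicate, leaves carry a branch target into another block. */
typedef struct PNode {
    int is_leaf;
    uint64_t branch_target;      /* valid when is_leaf */
    struct PNode *taken, *not_taken;
} PNode;

/* Walk the tree using prediction bits (one bit per predicate, derived in
 * the real design from a control point history); predicate values are
 * predicted along the way until a leaf supplies the branch target. */
static uint64_t predict(const PNode *n, unsigned prediction_bits) {
    while (!n->is_leaf) {
        int predicate = prediction_bits & 1;   /* predicted predicate value */
        printf("predicate predicted %d\n", predicate);
        n = predicate ? n->taken : n->not_taken;
        prediction_bits >>= 1;
    }
    return n->branch_target;                   /* predicted branch target */
}

int main(void) {
    PNode leaf_a = {1, 0x100, 0, 0}, leaf_b = {1, 0x200, 0, 0};
    PNode root = {0, 0, &leaf_a, &leaf_b};
    printf("target 0x%llx\n", (unsigned long long)predict(&root, 0x1));
    return 0;
}
```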





Operand and limits optimization for binary translation system

Methods and systems for optimizing generation of natively executable code from non-native binary code are disclosed. One method includes receiving a source file including binary code configured for execution according to a non-native instruction set architecture. The method also includes translating one or more code blocks included in the executable binary code to source code, and applying an optimizing algorithm to instructions in the one or more code blocks. The optimizing algorithm is selected to reduce a number of memory address translations performed when translating the source code to native executable binary code, thereby resulting in one or more optimized code blocks. The method further includes compiling the source code to generate an output file comprising natively executable binary code including the one or more optimized code blocks.





APC model extension using existing APC models

A method of extending advanced process control (APC) models includes constructing an APC model table including APC model parameters of a plurality of products and a plurality of work stations. The APC model table includes empty cells and cells filled with existing APC model parameters. Average APC model parameters of the existing APC model parameters are calculated, and filled into the empty cells as initial values. An iterative calculation is performed to update the empty cells with updated values.
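
A toy version of the table-filling procedure in C, under assumed dimensions and an assumed row/column-mean update rule (the abstract does not specify the iteration formula):

```c
#include <stdio.h>

#define PRODUCTS 3
#define STATIONS 3
#define EMPTY    (-1.0)

/* Hypothetical APC model table: rows are products, columns are work
 * stations; EMPTY marks cells with no existing model parameter. */
static double table[PRODUCTS][STATIONS] = {
    { 1.0, EMPTY, 3.0 },
    { EMPTY, 2.0, EMPTY },
    { 2.0, 4.0, EMPTY },
};

/* Seed every empty cell with the average of the existing parameters, then
 * iterate: re-estimate each seeded cell from its row and column means. */
static void fill_table(int iterations) {
    double sum = 0; int n = 0;
    int empty[PRODUCTS][STATIONS] = {{0}};
    for (int p = 0; p < PRODUCTS; p++)
        for (int s = 0; s < STATIONS; s++) {
            if (table[p][s] == EMPTY) empty[p][s] = 1;
            else { sum += table[p][s]; n++; }
        }
    double avg = sum / n;
    for (int p = 0; p < PRODUCTS; p++)
        for (int s = 0; s < STATIONS; s++)
            if (empty[p][s]) table[p][s] = avg;   /* initial value */

    for (int it = 0; it < iterations; it++)       /* iterative update */
        for (int p = 0; p < PRODUCTS; p++)
            for (int s = 0; s < STATIONS; s++)
                if (empty[p][s]) {
                    double row = 0, col = 0;
                    for (int j = 0; j < STATIONS; j++) row += table[p][j];
                    for (int i = 0; i < PRODUCTS; i++) col += table[i][s];
                    table[p][s] = (row / STATIONS + col / PRODUCTS) / 2.0;
                }
}

int main(void) {
    fill_table(10);
    for (int p = 0; p < PRODUCTS; p++, puts(""))
        for (int s = 0; s < STATIONS; s++) printf("%6.3f ", table[p][s]);
    return 0;
}
```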





Executing machine instructions comprising input/output pairs of execution nodes

A computing machine is disclosed having a memory system for storing a collection of execution nodes, a head for reading a sequence of symbols in the execution nodes in the memory system, and writing a sequence of symbols in the memory system. The machine is configured to execute a computation with a collection of pairs of execution nodes. Each pair of execution nodes represents a machine instruction. One execution node in the pair represents input of the machine instruction represented by the execution nodes. Another execution node in the pair represents output of the machine instruction represented by the execution nodes. Each execution node has a state of the machine, a sequence of symbols and a number.





Detecting and reissuing of loop instructions in reorder structure

A processor for processing loop instructions can include an instruction reorder structure and a loop processing controller. The instruction reorder structure is configured to store decoded instructions according to program order and issue the decoded instructions for execution out of program order. The loop processing controller is configured to detect a loop in the decoded instructions stored in the instruction reorder structure and cause the instruction reorder structure to reissue the decoded instructions that form the loop for re-execution.
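
One way to picture the detect-and-reissue behavior is the C sketch below; the reorder structure is reduced to a small array, and the loop detector simply looks for the backward branch target among resident entries.

```c
#include <stdio.h>

/* Sketch: a reorder structure holds decoded instructions in program
 * order; when the controller sees a backward branch to an address still
 * resident in the structure, it reissues that span instead of refetching
 * and redecoding the loop body. */
#define SLOTS 8

typedef struct { unsigned pc; const char *text; } Decoded;

static Decoded rob[SLOTS];
static int head;  /* number of valid entries */

static void issue(int i) { printf("issue pc=%u %s\n", rob[i].pc, rob[i].text); }

/* Returns the slot of the loop start if the branch target is resident. */
static int detect_loop(unsigned branch_target) {
    for (int i = 0; i < head; i++)
        if (rob[i].pc == branch_target) return i;
    return -1;
}

int main(void) {
    rob[head++] = (Decoded){ 100, "load" };
    rob[head++] = (Decoded){ 104, "add"  };
    rob[head++] = (Decoded){ 108, "bne 100" };
    for (int i = 0; i < head; i++) issue(i);          /* first pass */
    int start = detect_loop(100);
    if (start >= 0)
        for (int i = start; i < head; i++) issue(i);  /* reissue, no refetch */
    return 0;
}
```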





Method for activating processor cores within a computer system

A technique for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.





Client-allocatable bandwidth pools

Methods and apparatus for client-allocatable bandwidth pools are disclosed. A system includes a plurality of resources of a provider network and a resource manager. In response to a determination to accept a bandwidth pool creation request from a client for a resource group, where the resource group comprises a plurality of resources allocated to the client, the resource manager stores an indication of a total network traffic rate limit of the resource group. In response to a bandwidth allocation request from the client to allocate a specified portion of the total network traffic rate limit to a particular resource of the resource group, the resource manager initiates one or more configuration changes to allow network transmissions within one or more network links of the provider network accessible from the particular resource at a rate up to the specified portion.
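
The accounting at the heart of the allocation request can be sketched in C; the `ResourceGroup` type and `allocate_bandwidth` check are hypothetical, and the real system would additionally push configuration changes to network links.

```c
#include <stdio.h>

/* Hypothetical resource group: a client-set total traffic rate limit and
 * per-resource allocations carved out of that pool. */
#define MAX_RESOURCES 4

typedef struct {
    double total_limit_mbps;            /* pool-wide rate limit */
    double allocated[MAX_RESOURCES];    /* per-resource portions */
} ResourceGroup;

/* Grant a portion of the pool to one resource if the pool still has
 * headroom (first allocation per resource assumed for simplicity). */
static int allocate_bandwidth(ResourceGroup *g, int resource, double mbps) {
    double used = 0;
    for (int i = 0; i < MAX_RESOURCES; i++) used += g->allocated[i];
    if (used + mbps > g->total_limit_mbps) return -1;  /* exceeds the pool */
    g->allocated[resource] = mbps;
    return 0;
}

int main(void) {
    ResourceGroup g = { .total_limit_mbps = 1000.0 };
    printf("alloc 600: %d\n", allocate_bandwidth(&g, 0, 600.0));  /* ok */
    printf("alloc 500: %d\n", allocate_bandwidth(&g, 1, 500.0));  /* rejected */
    return 0;
}
```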





Method and device for passing parameters between processors

The disclosure provides a method for passing a parameter between processors. The method comprises the following steps: in a source program of a slave processor, directly introducing a static configuration parameter to be passed; obtaining a relative address of the static configuration parameter when converting the source program of the slave processor into a target program of the slave processor; and directly configuring, by a master processor, a parameter value of the static configuration parameter in the target program of the slave processor according to the obtained relative address. The disclosure also provides a system for passing a parameter between processors. The system has no need for external hardware such as a dual-port Random Access Memory (RAM) or a register; thus, the dependence of parameter transmission on external hardware is reduced, and the area and static power consumption of the chip are reduced as well. The disclosure reduces the cycle delay of the slave processor in accessing the dual-port RAM and the register, thereby effectively reducing the dynamic power consumption of the chip, improving the processing capability of the slave processor and enhancing the effective performance of the slave processor.
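
A minimal sketch of the master-side configuration step, assuming the relative address recorded during conversion is simply an offset into the slave's program image (names and sizes invented):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical slave target program image: the static configuration
 * parameter occupies a known relative address recorded at conversion
 * time. */
#define PARAM_REL_ADDR 4   /* offset of the parameter inside the image */

static uint8_t slave_image[16];  /* stand-in for the slave's program memory */

/* Master side: write the parameter value directly into the slave image
 * at its relative address, with no dual-port RAM or mailbox register. */
static void configure_parameter(uint8_t *image, size_t rel_addr, uint32_t value) {
    memcpy(image + rel_addr, &value, sizeof value);
}

int main(void) {
    configure_parameter(slave_image, PARAM_REL_ADDR, 0xCAFE);
    uint32_t v;
    memcpy(&v, slave_image + PARAM_REL_ADDR, sizeof v);
    printf("slave sees parameter 0x%X\n", v);
    return 0;
}
```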





Information processing apparatus for restricting access to memory area of first program from second program

A processor determines whether a first program is under execution when a second program is executed, and changes a setting of a memory management unit based on access prohibition information so that a fault occurs when the second program accesses memory while the first program is under execution. Then, when the fault occurs while both programs are under execution, the processor determines whether an access from the second program to a memory area used by the first program is permitted based on memory restriction information, and changes the setting of the memory management unit so that the fault does not occur when the access to the memory area is permitted.





Utilization of a microcode interpreter built in to a processor

Augmented processor hardware contains a microcode interpreter. When encrypted microcode is included in a message from a service, the microcode may be passed to the microcode interpreter. Based on decryption and execution of the microcode taking place at the processor hardware, extended functionality may be realized.





Instruction execution

A method of executing an instruction set including a first instruction and a second instruction includes reading the first instruction; determining whether the first instruction is an instruction which is integral with the second instruction; reading the second instruction; if the first instruction is integral with the second instruction, interpreting the operand field of the second instruction to indicate at least one value to be used in conjunction with at least one bit of the first instruction; and if the first instruction is not integral with the second instruction, interpreting the operand field of the second instruction to indicate an entry of a look-up table.
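
The two interpretations of the second instruction's operand field can be sketched in C; the flag bit, field widths, combining rule, and table contents are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical encoding: bit 15 of the first instruction flags that the
 * next instruction is integral with it; the second instruction's operand
 * field is then a literal rather than a look-up table entry. */
#define INTEGRAL_FLAG 0x8000u
#define OPERAND(insn) ((insn) & 0x00FFu)

static const uint16_t lut[4] = { 10, 20, 30, 40 };   /* look-up table */

static uint16_t interpret_pair(uint16_t first, uint16_t second) {
    if (first & INTEGRAL_FLAG)
        /* integral: operand field is a value combined with bits of the
         * first instruction (here simply OR-ed in for illustration) */
        return OPERAND(second) | ((first & 0x0F00u) >> 4);
    /* not integral: operand field indexes the look-up table */
    return lut[OPERAND(second) % 4];
}

int main(void) {
    printf("%u\n", interpret_pair(0x8300, 0x0005));  /* integral pair */
    printf("%u\n", interpret_pair(0x0300, 0x0002));  /* LUT entry 2 -> 30 */
    return 0;
}
```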





Issue policy control within a multi-threaded in-order superscalar processor

A multi-threaded in-order superscalar processor 2 includes an issue stage 12 including issue circuitry 22, 24 for selecting instructions to be issued to execution units 14, 16 in dependence upon a currently selected issue policy. A plurality of different issue policies are provided by associated different policy circuitry 28, 30, 32 and a selection between which of these instances of the policy circuitry 28, 30, 32 is active is made by policy selecting circuitry 34 in dependence upon detected dynamic behavior of the processor 2.





Efficient conditional ALU instruction in read-port limited register file microprocessor

A microprocessor performs an architectural instruction that instructs it to perform an operation on first and second source operands to generate a result and to write the result to a destination register only if its architectural condition flags satisfy a condition specified in the instruction. A hardware instruction translator translates the instruction into first and second microinstructions. To execute the first microinstruction, an execution pipeline performs the operation on the source operands to generate the result. To execute the second microinstruction, it writes the destination register with the result generated by the first microinstruction if the architectural condition flags satisfy the condition, and writes the destination register with the current value of the destination register if they do not.
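
The two-microinstruction split can be sketched as follows; `uop1_compute` and `uop2_writeback` are invented names, and the conditional operation is shown as an ADD purely as an example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Sketch of the split: the first uop always performs the operation; the
 * second writes either the new result or the old destination value, so
 * the register write itself happens unconditionally. */
static uint32_t uop1_compute(uint32_t src1, uint32_t src2) {
    return src1 + src2;                     /* shown as ADD for illustration */
}

static uint32_t uop2_writeback(bool cond_met, uint32_t result, uint32_t old_dest) {
    return cond_met ? result : old_dest;    /* select what lands in dest */
}

int main(void) {
    uint32_t dest = 100;
    dest = uop2_writeback(true, uop1_compute(3, 4), dest);
    printf("condition met: dest = %u\n", dest);        /* 7 */
    dest = uop2_writeback(false, uop1_compute(10, 20), dest);
    printf("condition not met: dest = %u\n", dest);    /* still 7 */
    return 0;
}
```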





Recovering from an error in a fault tolerant computer system

A leading thread and a trailing thread are executed in parallel. Assuming that no transient fault occurs in a given section, the system executes the section speculatively, with the leading thread and the trailing thread preferably assigned to two different cores. The two threads are executed simultaneously, buffering their writes in a thread-local area instead of performing write operations on the shared memory. When the respective execution results of the two threads match, the content buffered in the thread-local area is committed and written to the shared memory. When the results do not match, the leading thread and the trailing thread are rolled back to the preceding commit point and re-executed.
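
The commit/rollback decision reduces to a small comparison step, sketched here in C with an invented `ThreadBuf` write buffer; real rollback would restore full thread state rather than return a flag.

```c
#include <stdio.h>

/* Each thread buffers its writes locally; matching results commit the
 * buffer to shared memory, a mismatch signals a rollback to the last
 * commit point. */
#define BUFSZ 4

typedef struct { int addr[BUFSZ]; int val[BUFSZ]; int n; int result; } ThreadBuf;

static int shared_mem[16];

static int try_commit(const ThreadBuf *lead, const ThreadBuf *trail) {
    if (lead->result != trail->result)
        return -1;                       /* transient fault: roll back */
    for (int i = 0; i < lead->n; i++)    /* commit leading thread's buffer */
        shared_mem[lead->addr[i]] = lead->val[i];
    return 0;
}

int main(void) {
    ThreadBuf lead  = { {3}, {99}, 1, 12345 };
    ThreadBuf trail = { {3}, {99}, 1, 12345 };
    printf(try_commit(&lead, &trail) == 0 ? "committed\n" : "rolled back\n");
    printf("shared_mem[3] = %d\n", shared_mem[3]);
    return 0;
}
```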





Virtualization support for branch prediction logic enable/disable at hypervisor and guest operating system levels

A hypervisor and one or more guest operating systems resident in a data processing system and hosted by the hypervisor are configured to selectively enable or disable branch prediction logic through separate hypervisor-mode and guest-mode instructions. By doing so, different branch prediction strategies may be employed for different operating systems and user applications hosted thereby to provide finer grained optimization of the branch prediction logic for different operating scenarios.





Efficient parallel computation of dependency problems

A computing method includes accepting a definition of a computing task, which includes multiple Processing Elements (PEs) having execution dependencies. The computing task is compiled for concurrent execution on a multiprocessor device, by arranging the PEs in a series of two or more invocations of the multiprocessor device, including assigning the PEs to the invocations depending on the execution dependencies. The multiprocessor device is invoked to run software code that executes the series of the invocations, so as to produce a result of the computing task.





Multiprocessor system, multiprocessor control method, and multiprocessor integrated circuit

In a multiprocessor system, a processor assigned a larger amount of tasks generally performs a larger amount of communication with other task-assigned processors than a processor assigned a smaller amount of tasks. Thus, so that each processor can perform the routing process efficiently, tasks are assigned such that, for any first processor and second processor where the number of task-assigned processors directly connected to the second processor is smaller than the number of task-assigned processors directly connected to the first processor, the amount of tasks assigned to the first processor is equal to or larger than the amount of tasks assigned to the second processor.





Method for activating processor cores within a computer system

A method for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks.





Data accessing method for flash memory storage device having data perturbation module, and storage system and controller using the same

A data accessing method, and a storage system and a controller using the same, are provided. The data accessing method is suitable for a flash memory storage system having a data perturbation module. The data accessing method includes receiving a read command from a host and obtaining a logical block to be read and a page to be read from the read command. The data accessing method also includes determining whether a physical block in a data area corresponding to the logical block to be read is a new block, and transmitting predetermined data to the host when the physical block corresponding to the logical block to be read is a new block. Thereby, the host is prevented from reading garbled code from the flash memory storage system having the data perturbation module.





High performance computing (HPC) node having a plurality of switch coupled processors

A High Performance Computing (HPC) node comprises a motherboard, a switch comprising eight or more ports integrated on the motherboard, and at least two processors operable to execute an HPC job, with each processor communicably coupled to the integrated switch and integrated on the motherboard.





Method and system for managing hardware resources to implement system functions using an adaptive computing architecture

An adaptable integrated circuit is disclosed having a plurality of heterogeneous computational elements coupled to an interconnection network. The interconnection network changes interconnections between the plurality of heterogeneous computational elements in response to configuration information. A first group of computational elements is allocated to form a first version of a functional unit to perform a first function by changing interconnections in the interconnection network between the first group of heterogeneous computational elements. A second group of computational elements is allocated to form a second version of a functional unit to perform the first function by changing interconnections in the interconnection network between the second group of heterogeneous computational elements. One or more of the first or second group of heterogeneous computational elements are reallocated to perform a second function by changing the interconnections between the one or more of the first or second group of heterogeneous computational elements.





Data processing method and apparatus for prefetching

A data processing device includes processing circuitry 20 for executing a first memory access instruction to a first address of a memory device 40 and a second memory access instruction to a second address of the memory device 40, the first address being different from the second address. The data processing device also includes prefetching circuitry 30 for prefetching data from the memory device 40 based on a stride length 70 and instruction analysis circuitry 50 for determining a difference between the first address and the second address. Stride refining circuitry 60 is also provided to refine the stride length based on factors of the stride length and factors of the difference calculated by the instruction analysis circuitry 50.
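
The abstract's mention of "factors of the stride length and factors of the difference" suggests, on one reading, a greatest-common-divisor refinement; the C sketch below shows that reading and is an assumption, not the patented circuit.

```c
#include <stdio.h>

/* GCD reading of stride refinement: the refined stride divides both the
 * current stride and the observed address difference, so prefetches stay
 * aligned with the actual access pattern. */
static unsigned gcd(unsigned a, unsigned b) {
    while (b) { unsigned t = a % b; a = b; b = t; }
    return a;
}

int main(void) {
    unsigned stride = 24;                 /* current prefetch stride */
    unsigned diff   = 64;                 /* second address - first address */
    unsigned refined = gcd(stride, diff); /* common factor: 8 */
    printf("refined stride = %u\n", refined);
    return 0;
}
```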





Shared load-store unit to monitor network activity and external memory transaction status for thread switching

An array of processing elements (PEs) is in a data packet-switched network interconnecting the PEs and memory to enable any of the PEs to access the memory. The network connects the PEs and their local memories to a common controller. The common controller may include a shared load/store (SLS) unit and an array control unit. A shared read may be addressed to an external device via the common controller. The SLS unit can continue activity as if a normal shared read operation has taken place, except that the transactions that have been sent externally may take more cycles to complete than the local shared reads. Hence, a number of transaction-enabled flags may not have been deactivated even though there is no more bus activity. The SLS unit can use this state to indicate to the array control unit that a thread switch may now take place.





Hardware assist thread for increasing code parallelism

Mechanisms are provided for offloading a workload from a main thread to an assist thread. The mechanisms receive, in a fetch unit of a processor of the data processing system, a branch-to-assist-thread instruction of a main thread. The branch-to-assist-thread instruction informs hardware of the processor to look for an already spawned idle thread to be used as an assist thread. Hardware implemented pervasive thread control logic determines if one or more already spawned idle threads are available for use as an assist thread. The hardware implemented pervasive thread control logic selects an idle thread from the one or more already spawned idle threads if it is determined that one or more already spawned idle threads are available for use as an assist thread, to thereby provide the assist thread. In addition, the hardware implemented pervasive thread control logic offloads a portion of a workload of the main thread to the assist thread.





Multiprocessor messaging system

A multiprocessor system includes a first microprocessor and a second microprocessor. A first signaling pathway is configured to send message transmission coordination signals from the first microprocessor to the second microprocessor. The first signaling pathway may be coupled to at least two flag registers associated with the second microprocessor. A second signaling pathway is configured to send message transmission coordination signals from the second microprocessor to the first microprocessor. The second signaling pathway may be coupled to at least two flag registers associated with the first microprocessor. The first signaling pathway is independent of the second signaling pathway.





Data mover moving data to accelerator for processing and returning result data based on instruction received from a processor utilizing software and hardware interrupts

Efficient data processing apparatus and methods include hardware components which are pre-programmed by software. Each hardware component triggers the other to complete its tasks. After the final pre-programmed hardware task is complete, the hardware component issues a software interrupt.





System, method and computer program product for recursively executing a process control operation to use an ordered list of tags to initiate corresponding functional operations

In accordance with embodiments, there are provided mechanisms and methods for controlling a process using a process map. These mechanisms and methods for controlling a process using a process map can enable process operations to execute in order without necessarily having knowledge of one another. The ability to provide the process map can avoid a requirement that the operations themselves be programmed to follow a particular sequence, and can further improve the ease with which the sequence of operations may be changed.





System and method for communicating with sensors/loggers in integrated radio frequency identification (RFID) tags

A system and method is disclosed for communicating with sensors/loggers in integrated radio frequency identification (RFID) tags. An RFID reader uses a Communicate With Data Logger Command to communicate with a data logger in an RFID tag. The RFID reader performs data access processes using an Index Register and a Data Register of the RFID tag. The RFID reader selects one of: (1) Index Read access, (2) Index Write access, (3) Data Write access, (4) Data Read access with parity, and (5) Data Read access with cyclic redundancy check (CRC). The RFID tag performs the requested data access and then performs an error detection process.
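
Tag-side handling of the five access types might be organized as the following C sketch; the register widths, memory size, and the omission of the actual parity/CRC computation are all simplifications.

```c
#include <stdio.h>

/* Hypothetical tag-side dispatch: the Index Register selects a logger
 * location, the Data Register carries the value; the parity and CRC
 * read variants differ only in the error check appended afterward. */
enum Access { INDEX_READ, INDEX_WRITE, DATA_WRITE, DATA_READ_PARITY, DATA_READ_CRC };

static unsigned index_reg, logger_mem[16];

static unsigned tag_access(enum Access a, unsigned value) {
    switch (a) {
    case INDEX_READ:        return index_reg;
    case INDEX_WRITE:       index_reg = value % 16; return 0;
    case DATA_WRITE:        logger_mem[index_reg] = value; return 0;
    case DATA_READ_PARITY:  /* fall through: same read, different check */
    case DATA_READ_CRC:     return logger_mem[index_reg];
    }
    return 0;
}

int main(void) {
    tag_access(INDEX_WRITE, 5);   /* point at logger location 5 */
    tag_access(DATA_WRITE, 77);   /* store a sample */
    printf("read back: %u\n", tag_access(DATA_READ_CRC, 0));
    return 0;
}
```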





Reception according to a data transfer protocol of data directed to any of a plurality of destination entities

A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to respective destination identities; and a response former arranged for receiving a message requesting a response indicating the availability of received data to each of a group of destination identities, and forming such a response in dependence on receiving the said message.





Accessing model specific registers (MSR) with different sets of distinct microinstructions for instructions of different instruction set architecture (ISA)

A microprocessor capable of running both x86 instruction set architecture (ISA) machine language programs and Advanced RISC Machines (ARM) ISA machine language programs is disclosed. The microprocessor includes a mode indicator that indicates whether the microprocessor is currently fetching instructions of an x86 ISA or ARM ISA machine language program. The microprocessor also includes a plurality of model-specific registers (MSRs) that control aspects of the operation of the microprocessor. When the mode indicator indicates the microprocessor is currently fetching x86 ISA machine language program instructions, each of the plurality of MSRs is accessible via an x86 ISA RDMSR/WRMSR instruction that specifies an address of the MSR. When the mode indicator indicates the microprocessor is currently fetching ARM ISA machine language program instructions, each of the plurality of MSRs is accessible via an ARM ISA MRRC/MCRR instruction that specifies the address of the MSR.





Storing in other queue when reservation station instruction queue reserved for immediate source operand instruction execution unit is full

A processing apparatus includes an execution unit which performs computation on two operand inputs each being selectable between read data from a register and an immediate value. The processing apparatus also includes another execution unit which performs computation on two operand inputs, one of which is selectable between read data from a register and an immediate value, and the other of which is an immediate value. A control unit determines, based on a received instruction specifying a computation on two operands, whether each of the two operands specifies read data from a register or an immediate value. Depending on the determination result, the control unit causes one of the execution units to execute the computation specified by the received instruction.
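
One reading of the dispatch rule, sketched in C: route an instruction whose second operand is an immediate to the immediate-only unit, falling back to the general unit when that reservation station queue is full. The types and the fallback rule are assumptions taken from the title.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical dispatch: unit B's second operand port only accepts an
 * immediate, so an instruction with an immediate second operand may use
 * either unit; when unit B's queue is full it is stored in unit A's
 * queue instead. */
typedef struct { bool op1_is_imm, op2_is_imm; } Insn;

static const char *dispatch(Insn in, bool imm_queue_full) {
    if (in.op2_is_imm && !imm_queue_full)
        return "unit B (immediate second operand)";
    return "unit A (register or immediate on both ports)";
}

int main(void) {
    Insn add_ri = { false, true };            /* reg + immediate */
    printf("%s\n", dispatch(add_ri, false));  /* goes to unit B */
    printf("%s\n", dispatch(add_ri, true));   /* queue full: unit A */
    return 0;
}
```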





Load/move and duplicate instructions for a processor

A method includes, in a processor, loading/moving a first portion of bits of a source into a first portion of a destination register and duplicating that first portion of bits in a subsequent portion of the destination register.
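
In C, the effect on a 128-bit destination can be sketched as follows (in the spirit of instructions such as MOVDDUP; the `Reg128` type is an illustration, not the architectural register):

```c
#include <stdio.h>
#include <stdint.h>

/* Load-and-duplicate: the low 64 bits of the source are loaded into the
 * low half of a 128-bit destination and duplicated into the high half. */
typedef struct { uint64_t lo, hi; } Reg128;

static Reg128 load_dup(const uint64_t *src) {
    Reg128 d;
    d.lo = *src;    /* first portion of bits of the source */
    d.hi = *src;    /* duplicated into the subsequent portion */
    return d;
}

int main(void) {
    uint64_t mem = 0x1122334455667788ULL;
    Reg128 x = load_dup(&mem);
    printf("lo=%llx hi=%llx\n", (unsigned long long)x.lo,
                                (unsigned long long)x.hi);
    return 0;
}
```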





Generating hardware events via the instruction stream for microprocessor verification

A processor receives an instruction operation (OP) code from a verification system. The instruction OP code includes instruction bits and forced event bits. The processor identifies a forced event based upon the forced event bits, which is unrelated to an instruction that corresponds to the instruction bits. In turn, the processor executes the forced event.





Dynamic energy savings for digital signal processor modules using plural energy savings states

In an example embodiment, there is described herein an apparatus comprising an interface for communicating with a plurality of digital signal processors and logic operable to send and receive data via the interface. The logic is configured to determine a first set of digital signal processors to be maintained in a ready state, a second set of digital signal processors to be maintained in a first energy saving state, and a third set of digital signal processors to be maintained in a second energy saving state.





Automatic WSDL download of client emulation for a testing tool

A method is disclosed which may include analyzing communication requests in a business process between a client and a server offering a service application to be tested. The method may further include identifying a call to a web service in the analyzed communication. The method may also include determining a location of a Web Service Description Language (WSDL) file relating to the web service on a remote server and downloading the WSDL file from the determined location. A computer readable medium having stored thereon instructions for performing the method and a computer system are also disclosed.





Framework for facilitating implementation of multi-tenant SaaS architecture

A framework for implementing a multitenant architecture is provided. The framework comprises a framework services module which is configured to provide framework services that facilitate abstraction of Software-as-a-Service (SaaS) services and crosscutting services for a Greenfield application and a non-SaaS-based web application. Further, the abstraction results in a SaaS-based multitenant web application. The framework further comprises a runtime module configured to automatically integrate and consume the framework services and APIs to facilitate monitoring and controlling of features associated with the SaaS-based multitenant web application. The framework further comprises a metadata services module configured to provide a plurality of metadata services to facilitate abstraction of the storage structure of metadata associated with the framework and act as APIs for managing the metadata. The framework further comprises a role-based administration module that facilitates management of the metadata through a tenant administrator and a product administrator.





Enhanced instruction scheduling during compilation of high level source code for improved executable code

Systems and methods for static code scheduling are disclosed. A method can include receiving an intermediate representation of source code, building a directed acyclic graph (DAG) for the intermediate representation, and creating chains of dependent instructions from the DAG for cluster formation. The chains are merged into clusters and each node in the DAG is marked with an identifier of a cluster it is part of to generate a marked instruction DAG. Instruction DAG scheduling is then performed using information about the clusters to generate an ordered intermediate representation of the source code.





Converting existing artifacts to new artifacts

Systems, apparatuses, methods, and computer program products are provided for converting an existing artifact to one or more new artifacts. For example, in one embodiment, a computing device can receive input identifying an existing artifact for conversion to one or more new artifacts. One or more items from the existing artifact and their respective types can be identified for conversion. Then, the one or more items of the existing artifact can be converted to one or more new artifacts.





Systems and methods for monitoring product development

A computer-implemented method is provided for evaluating team performance in a product development environment. The method includes receiving a plurality of points of effort made by a team over a plurality of days in a time period, computing a slope associated with a line of best fit through the plurality of points of effort over the plurality of days, computing a deviation of the slope from an ideal slope corresponding to a desired performance rate for the team, and generating a display illustrating at least one of the slope, the ideal slope or the deviation.
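
The slope and deviation computations are standard least squares; a small C sketch with made-up effort data:

```c
#include <stdio.h>

/* Least-squares slope through (day, effort) points, plus deviation from
 * an ideal slope; the formula is the standard one, the data are made up. */
static double best_fit_slope(const double *x, const double *y, int n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

int main(void) {
    double days[]   = { 1, 2, 3, 4, 5 };
    double effort[] = { 40, 34, 29, 21, 16 };   /* remaining effort points */
    double slope = best_fit_slope(days, effort, 5);
    double ideal = -8.0;                        /* desired burn-down rate */
    printf("slope=%.2f deviation=%.2f\n", slope, slope - ideal);
    return 0;
}
```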





Conducting verification in event processing applications using formal methods

A method of applying formal verification methodologies to event processing applications is provided herein. The method includes the following stages: representing an event processing application as an event processing network, being a graph with event processing agents as nodes; generating a finite state machine based on the event processing network, wherein the finite state machine is an over-approximation of the event processing application; expressing stateful rules and policies that are associated with the event processing application using temporal logic, to yield a temporal representation of the event processing application; combining the temporal representation and the finite state machine into a model; generating a statement associated with a user-selected verification-related property of the event processing application, wherein the statement is generated using the temporal representation; and applying the statement to the model, to yield an indication for: (i) a correctness of the statement or (ii) a counter example, respectively.





Applying coding standards in graphical programming environments

Graphical programming or modeling environments in which a coding standard can be applied to graphical programs or models are disclosed. The present invention provides mechanisms for applying the coding standard to graphical programs/models in the graphical programming/modeling environments. The mechanisms may detect violations of the coding standard in the graphical model and report such violations to the users. The mechanisms may automatically correct the graphical model to remove the violations from the graphical model. The mechanisms may also automatically avoid the violations in the simulation and/or code generation of the graphical model.





Unified and extensible asynchronous and synchronous cancelation

A cancelation registry provides a cancelation interface whose implementation registers cancelable items such as synchronous operations, asynchronous operations, type instances, and transactions. Items may be implicitly or explicitly registered with the cancelation registry. A consistent cancelation interface unifies cancelation management for heterogeneous items, and allows cancelation of a group of items with a single invocation of a cancel-registered-items procedure.
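
A cancelation registry of this shape can be sketched in C with function pointers; the registry size, callback signature, and item types are invented for illustration.

```c
#include <stdio.h>

/* Heterogeneous items register a cancel callback; one invocation of
 * cancel_registered_items cancels everything registered as a group. */
#define MAX_ITEMS 8

typedef void (*cancel_fn)(void *item);

static struct { cancel_fn fn; void *item; } registry[MAX_ITEMS];
static int n_items;

static void register_cancelable(cancel_fn fn, void *item) {
    if (n_items < MAX_ITEMS) {
        registry[n_items].fn = fn;
        registry[n_items].item = item;
        n_items++;
    }
}

/* Single invocation cancels the whole group, whatever the item types. */
static void cancel_registered_items(void) {
    for (int i = 0; i < n_items; i++) registry[i].fn(registry[i].item);
    n_items = 0;
}

static void cancel_operation(void *item)   { printf("canceled operation %s\n", (char *)item); }
static void cancel_transaction(void *item) { printf("rolled back txn %s\n", (char *)item); }

int main(void) {
    register_cancelable(cancel_operation, (void *)"download-42");
    register_cancelable(cancel_transaction, (void *)"txn-7");
    cancel_registered_items();
    return 0;
}
```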





Automated generation of two-tier mobile applications

The disclosure generally describes computer-implemented methods, software, and systems for creating and using two-tier mobile applications. A computer-implemented method includes identifying at least a portion of a database to be associated with a mobile application, retrieving a set of metadata associated with the at least a portion of the identified database, automatically generating a set of mobile application source code for directly accessing the at least a portion of the database based on the set of retrieved metadata, and compiling the set of mobile application source code into a distributable mobile application, the distributable mobile application configured to directly access the identified database associated with the mobile application. In some instances, the identifying, retrieving, generating, and compiling operations are performed at design time, while at runtime, the mobile application is executable by a mobile device and, during runtime execution, can request database-related information directly from the identified database.





Methods and devices for managing a cloud computing environment

Methods, devices, and systems for management of a cloud computing environment for use by a software application. The cloud computing environment may be an N-tier environment. Multiple cloud providers may be used to provide the cloud computing environment.





System for selecting software components based on a degree of coherence

Disclosed is a novel system and method to select software components. A set of available software components is accessed. Next, one or more dimensions are defined. Each dimension is an attribute of the set of available software components. A set of coherence distances between each pair of the available software components is calculated for each of the dimensions that have been defined. The coherence distances calculated between each pair of the available software components are then combined into an overall coherence degree for each of the available software components. Using the overall coherence degree, one or more software components are selected to be included in a software bundle.





System and method for recommending software artifacts

A method for recommending at least one artifact to an artifact user is described. The method includes obtaining user characteristic information reflecting preferences, particular to the artifact user, as to a desired artifact. The method also includes obtaining first metadata about each of one or more candidate artifacts, and scoring, as one or more scored artifacts, each of the one or more candidate artifacts by evaluating one or more criteria, not particular to the artifact user, applied to the first metadata. The method further includes scaling, as one or more scaled artifacts, a score of each of the one or more scored artifacts, by evaluating the suitability of each of the one or more scored artifacts in view of the user characteristic information. The method lastly includes recommending to the artifact user at least one artifact from among the one or more scaled artifacts based on its scaled score.