Virus Recordings

Description:
This is my minimal take on the Virus Recordings logo and text style. For those of you who don't know who the Virus krew are (Ed Rush & Optical, Rhymetyme)... better get a move on!

Cheers Linux Headz,
Lemme know what you think.
Ben.

Ratings & Comments

22 Comments

novomente

Nice prototype. The interactivity makes the ideas much clearer, and as a playground it is perfect. With HTML5, CSS3 and JavaScript a lot of things are possible, and the UI can look any way you like. It reminds me of a time when applications had their own original look and usability. But at that time it was a problem, because users had to learn every application separately. The problem was solved over the years with GUI toolkits (GUI components etc.). Such a toolkit was a library programmed for each operating system or each desktop environment. It was also a solution to the low memory of desktop computers: applications shared the GUI toolkit in order to reduce the memory footprint.

With HTML5 etc. there is a similar problem of a per-application original look and usage. One could say it can be solved in a similar way by programming an HTML/JavaScript toolkit. Yes, that is possible. But in the end many applications could be unsatisfied with such a toolkit, and their developers would choose to create their own. So I think that instead of making a toolkit or an HTML/JavaScript API, there must be a guideline (concept) description which most application GUIs, most GUI libraries, or most HTML/JavaScript toolkits have to follow. The guidelines only say how the desktop should look and function. Such a thing has already been done with Human Interface Guidelines ( http://en.wikipedia.org/wiki/Human_interface_guidelines ), which the DE guideline should stand on, and freedesktop.org ( http://en.wikipedia.org/wiki/Freedesktop.org ), which is a project to make X Window toolkits offer the same usage from a user's perspective (so that applications using the KDE toolkit are very similar in usage to applications in GNOME, and vice versa). The HTML/JavaScript guideline should play the same role as freedesktop.org. The guideline must not be too complex, in order to allow a wide range of applications to follow it. On the other hand there could be some further guidelines (concepts), for example a General User UI Concept, an Administration Concept (a concept for system administrators - terminal etc.), Graphics and Multimedia Concepts (DTP, 3D creation, movie creation etc.), a Server System Concept, and so on.

Maybe we should distinguish between the terms "guideline" and "concept". The difference is that guidelines are general enough that a wide variety of GUIs can follow them. A concept is more specific and describes, for example, HTML/JavaScript "components", the desktop, icons, functionality. The concept is only a document describing the functionality and look of a toolkit; it is not a programmed toolkit. To explain it exactly, let's say GNOME is a coded toolkit. The GNOME concept is only a document describing the GNOME toolkit. With such a description the GNOME developers know exactly what the Gnome-Shell is and what it should do, and then they develop the Shell in C++. So we can make a guideline and then a FluiDE HTML/JavaScript concept (a document describing the FluiDE desktop), and developers can then code their own FluiDE toolkit built exactly for their single application. Then they need not share any HTML/JavaScript code among applications from different developer groups, and all applications will still look and function very similarly. Of course, just as web application frameworks exist (Joomla, Drupal etc.: http://en.wikipedia.org/wiki/Comparison_of_web_application_frameworks ), there could also be a finished, coded FluiDE toolkit, or many other toolkits and frameworks, shared among applications from a wide range of developers.

The primary goal of a guideline is making usage very similar across a wide range of applications (so that a user does not have to learn how to use every application separately). The concept is specific (but still general and open enough) to create a UI. To say it exactly: the GNOME concept exactly describes the GNOME DE, and developers can create their own toolkits and frameworks which would look and behave exactly like a GNOME DE - so there could be per-application specific GNOME DEs, with or without sharing the framework. Our task could be simply to create very smart concept(s) which are really worth following, plus code a FluiDE DE as a real working example of what the concept is capable of. And if the FluiDE code is perfectly written, it could be shared and meet with success. These are my thoughts today.

MasKalamDug

I just discovered UX. It stands for User Experience. I don't know how much of it there is out there; I have just been reading the website uxnewsfeed.com. It's about how companies can interact better with their customers. It would be silly for us to dismiss business experience when we can get it. The problems a business has dealing with its customers are no different from those an OS has dealing with its users. I've been spending a lot of time on interactive fiction (especially the Inform language) and have learned a lot about the problems as they see them. I am trying to figure out how much of what I have learned might apply to GUI design. My initial reaction is "not much", because fiction interaction is so much more complex than current GUI interaction. But maybe we would be well advised to plan for future situations where complex interaction takes place.

novomente

Is it possible to merge Martin Gimpl's Stripes with Windows 8 Metro apps? I made an image to show the merge. I think it breaks the beauty of the Stripes philosophy, but it is only food for our thoughts about new things. http://novomente-activities.blogspot.com/2012/02/stripes-windows-metro.html BTW - I must note something to David: although I can let my imagination fly far ahead and above, and I have some crazy ideas and thoughts, when I have to make real concepts I am able to think with my feet on the ground. And when making final decisions I am able to strike out all that thinking and those ideas and decide on older, proven but better solutions :)

Fri13

1) The user interface (UI) is not just software; that group includes all the hardware as well, like the keyboard, mouse and display. Every human interaction with the device is part of the user interface: lights, buttons, switches, keys, wheels, resolution, colors, shapes, materials... The graphical or text-oriented user interface generated by software is half of the UI, while the hardware is the other half.
2) An operating system does not have a user interface. It is other software's job to generate the graphical or textual user interface. The operating system's task is to manage the resources so that all other software can work and lease those resources.

novomente

Quote:

With all this talk about a new menu concept, I started to wonder how relevant "applications" are anymore. I alluded to this previously (where I suggested that files were the focus rather than the app), but what if apps were simply "extensions" of the OS, all accessed from a single UI? For example, you would type or say "email joe" and it would then customize the UI to better fit the task, but remain essentially unchanged. So basically, I'm saying there should be little or no difference between launching a program and using it. An application would then be nothing more than a collection of capabilities in a library, with some instructions on how the OS should present those abilities. The OS would do everything else.
Exactly. I had this idea too. I imagined every application being a plugin into the OS, with a single (or several) type of user control. The whole OS would then be nothing more than a single application extensible by plugins (the app plugins - libraries etc.). I was also thinking a bit about the technical side, and talked about some technical aspects of this idea when we discussed today's application plugins (in previous comments some time ago). But it has one problem: the OS itself. It would be a lot of work, maybe even a redesign of the OS. Well, that's a deal the size of the whole computer world :) - redesign Linux from the base :D

OK, nice ideas. But there are still only three of us who are most active in this group. Although this group could start a UI revolution on the scale of the XEROX era, we must still concentrate on reachable goals (as David said). Defining some near-term tasks for our work will require balancing imagination against thinking with our feet on the ground. Both points of view are correct (I think), at least for a few days or weeks (maybe months) before we start thinking in terms of real tasks. But it is all so exciting that I have the same feeling: start some pre-work, think about technical issues, determine some final goals. Make something that does some of what we are talking about here. OK, but without hurrying, I will spend the next few days talking about the whole pack of ideas I have had in my mind for two years. Surely you will have other ideas and thoughts from many points of view. When we discuss the ideas that come up over the next days, we may find that we can lay down some real, reachable goals and maybe make a good start on some experiments and real work. It seems it depends on us whether we produce something that will attract other people to join this group (at least).
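[Editor's note] To make the "application as a collection of capabilities" idea above concrete, here is a minimal C sketch. Everything in it (the fluide_capability struct, the verb strings, the handler) is a hypothetical illustration, not an existing API:

    #include <stdio.h>

    /* Hypothetical descriptor: a capability the OS shell can invoke directly.
     * The verb is what the user types or says ("email"), the handler does the
     * work, and the presentation hint tells the shell how to adapt its UI. */
    typedef struct {
        const char *verb;                       /* e.g. "email" */
        int        (*handler)(const char *arg); /* e.g. arg = "joe" */
        const char *presentation_hint;          /* e.g. "compose-form" */
    } fluide_capability;

    static int email_handler(const char *recipient)
    {
        printf("opening a compose view addressed to %s\n", recipient);
        return 0;
    }

    /* In this sketch the "application" is nothing more than this table;
     * the shell would load it and route "email joe" to email_handler("joe"). */
    static const fluide_capability email_app[] = {
        { "email", email_handler, "compose-form" },
    };

    int main(void)
    {
        /* Stand-in for the shell dispatching the user's command. */
        return email_app[0].handler("joe");
    }

In such a model the shell owns the whole UI, and an "application" is reduced to a table of verbs plus hints about presentation.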

randallovelace

I watched a video of the new interface for use with the newer Unity for Ubuntu 12.04 LTS - I thought it looked interesting, though I think it assumes that you know every menu option for every program you run.

MasKalamDug

I fear this discussion has come to an end. In my case I was thrown off course by the realization that plug-ins are just another way of looking at object inheritance. That insight pushed me back into object theory and away from GUIs. I am still deep into considering what objects really signify, and all I can contribute now is the idea that the GUI is the proper place to register plug-ins. What exactly a plug-in is seems a bit mysterious. I am inclined to identify it with the interface of additional methods added when one object type is derived from another. The GUI of the older application object will work perfectly with the newer application object - provided I didn't override any of the older methods (or, if I did, only harmlessly). But the older GUI cannot access the new interface. So a GUI for the newer object must integrate the new interface, and in that case it should also handle "registering" the plug-in with the code. Taking the idea a bit further, I can look at the GUI itself as a kind of plug-in to the code object. But it really is more like a wrapper around the code object. Maybe the best approach would be two objects - code and GUI - acting as a team. But these are mere speculations; I have no idea where they will go.
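[Editor's note] A rough C illustration of that inheritance view, strictly as a sketch: a function-pointer "interface" stands in for the object's methods, and all of the names (editor_iface, spell_check, and so on) are invented for the example. The point is that a GUI written against the base interface keeps working on the derived object, but cannot reach the added method:

    #include <stdio.h>

    /* Base "code object": the interface the older GUI was written against. */
    typedef struct {
        void (*open)(void);
        void (*save)(void);
    } editor_iface;

    /* Derived object: the same base interface plus one added method (the "plug-in"). */
    typedef struct {
        editor_iface base;          /* the older GUI only ever sees this part      */
        void (*spell_check)(void);  /* new interface the older GUI cannot reach    */
    } editor_with_spell_iface;

    static void open_impl(void)  { puts("open");  }
    static void save_impl(void)  { puts("save");  }
    static void spell_impl(void) { puts("spell check"); }

    /* An "old" GUI routine: works unchanged on the newer object because the
     * base methods were not overridden (or only harmlessly). */
    static void old_gui_save_button(editor_iface *e) { e->save(); }

    int main(void)
    {
        editor_with_spell_iface e = { { open_impl, save_impl }, spell_impl };
        old_gui_save_button(&e.base); /* old GUI + new object: fine              */
        e.spell_check();              /* only a newer GUI would expose this      */
        return 0;
    }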

randallovelace

What if, instead of the whole z-stacking and tiling, we think a little differently? If we are creating in HTML5, could we not do a truly 3D desktop, where windows 'float' in an x/y/z environment? Also, would multi-touch be useful with that? The top left/bottom right of a window could be grabbed and dragged to make it 'full screen', dragged again to make it small, and a single touch would move it around. If this is going to be 'new', why not make it truly new?

MasKalamDug

I formulated an application model where there is a front end, a back end, and virtual shared storage. I use gedit as a test bed, but first let's start with the simplest possible text handling - a file browser. In a file browser the front end simply displays the contents of the file - nothing more. I would say the virtual inner interface is, at a minimum, a struct like this:

    Void* file_address
    Int   file_length
    Char  character
    Bool  step_right
    Bool  step_left
    Int   error

The two ends exchange messages saying "x = y", where x belongs to the interface and y is a new value to assign to x. The front end starts by running some file-finder utility to find a file to display, then loading that file into data memory, and finally sending two messages:

    front: file_address = ...
    front: file_length = ...

Then it starts to display the text. It gets the successive characters as follows:

    front: step_right = 0
    back:  character = '...'

When the file is exhausted and there is still one more step_right:

    back:  error = ...

After that, everything can be done by the front end. But that requires the front end to maintain a duplicate copy of the text. We can do a lot better by adding a pair of data items to the inner interface:

    Bool   get_cursor
    Cursor cursor

Cursor is an opaque type the front end does not understand, but the front end can save a cursor value and compare two cursor values. The get_cursor variable is really a command - as are step_right and step_left - that asks the back end to supply the value of cursor. Then the front end can, for example, remember where the cursor was when each page began, and if it wants to, say, go back one page, it sends:

    front: cursor = ...   // the value is the saved cursor value

In order to get to an editor we need to bring in an edit add-in. I am cheating a little here because I am really assuming the browser is just the editor with editing turned off. A simple edit capability can run with four more commands in the interface:

    Bool insert_left
    Bool insert_right
    Bool remove_left
    Bool remove_right

The remove commands return the character removed and the insert commands insert the character. The way add-ins work is that the messages are realized as function calls to one of two functions - front or back. The arguments are an identifier for the variable and the value to use. The back end starts with a select on the variable; if it can't find the variable it passes the problem to the first add-in. Here I am designing the back end, which certainly suggests that the front end needs its own add-ins. This is far from finished. Another installment another day.
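[Editor's note] As one concrete reading of that message protocol, here is a minimal C sketch made under this editor's own assumptions: the var_id enumeration, the back_receive function, and the single shared inner_interface are inventions for illustration, not part of the comment's design:

    #include <stdint.h>
    #include <stdio.h>

    /* The virtual shared storage between the front end and the back end. */
    typedef struct {
        void *file_address;
        int   file_length;
        char  character;   /* last character the back end produced            */
        int   error;       /* set when a step runs off the end of the file    */
    } inner_interface;

    /* Identifiers for the "x" in a message "x = y". */
    enum var_id { FILE_ADDRESS, FILE_LENGTH, STEP_RIGHT, STEP_LEFT };

    static inner_interface iface;
    static int cursor;     /* back-end private position, opaque to the front end */

    /* The back end receives one "x = y" message and acts on it. */
    static void back_receive(enum var_id var, intptr_t value)
    {
        switch (var) {
        case FILE_ADDRESS: iface.file_address = (void *)value; break;
        case FILE_LENGTH:  iface.file_length  = (int)value;    break;
        case STEP_RIGHT:
            if (cursor >= iface.file_length) { iface.error = 1; break; }
            iface.character = ((char *)iface.file_address)[cursor++];
            break;
        case STEP_LEFT:
            if (cursor <= 0) { iface.error = 1; break; }
            iface.character = ((char *)iface.file_address)[--cursor];
            break;
        }
    }

    int main(void)
    {
        static char text[] = "hello";
        back_receive(FILE_ADDRESS, (intptr_t)text); /* front: file_address = ... */
        back_receive(FILE_LENGTH, 5);               /* front: file_length  = ... */
        for (int i = 0; i < 5; i++) {
            back_receive(STEP_RIGHT, 0);            /* front: step_right = 0     */
            putchar(iface.character);               /* back:  character  = '...' */
        }
        putchar('\n');                              /* prints "hello"            */
        return 0;
    }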

user333

It seems that we have never actually decided on our main goals, so now is a great time to decide ;) We need to come to an agreement before we go much farther, so please comment! What is our overall goal? To become a popular DE, or to be an example of an ideal interface? Or both? I doubt that any application developer will change a program to be compatible with us unless the project becomes as mainstream as GNOME or KDE. However, if we have ideal goals in place but delay them so that existing programs still work, then later on perhaps we would have enough influence to get application developers to use our methods. Or possibly an existing DE would like our project and use some of its ideas. I (speaking for myself here) think we should limit ourselves to creating only a "shell" and not our own development libraries, similar in some ways to Unity. While we could try to make an entirely different system from the start, people would not accept the changes, so we should introduce them in stages. We can use some "hacks", such as a global menu that uses our design but still uses the standard GTK and Qt. Then, if we get a good amount of acceptance, we can try introducing better ways of writing programs.

user333

I guess this would be a very important thing to establish early on in the design. I'm leaning more towards a power user or an office worker who needs to get things done fast without being frustrated by the interface.

user333

I'm really sorry! Whenever I post a message to my group it keeps going to the latest artwork!!! I wish someone would fix this problem.

MasKalamDug

Where I now stand is as follows:
(1) There is a physical display containing a rectangular array of physical pixels (colored dots).
(2) There is a parallel, computer-internal array of 32-bit software pixels that is connected to the physical array. I call this software array the display.
(3) There is a parallel array of message target addresses - one corresponding to each pixel.
(4) Input events happen at pixels, and when they do, event messages are sent to the corresponding message target.
(5) There is an input event handler which associates each input event with some pixel (and therefore with the message target corresponding to that pixel).
And that, fundamentally, is your GUI.

The first thing to elaborate on is the event handler. Assume a mouse. The mouse is always associated with some pixel (or rather with the message target corresponding to some pixel). The mouse focus is usually that message target, but it can be grabbed. All the keyboard and mouse button events that are not grabbed by the event handler for some higher purpose go to the focus. The mouse moves. I assume that generally the pixel it moves to has the same message target as the previous pixel; in that simpler situation no event message will be sent and the event handler will just move the sprite image. If the pixels do not have the same message target, the handler sends a mouse-leaving message to the old message target (if it had been entered) and then (re)starts a countdown. If the countdown ever reaches zero, it sends a mouse-entering message to the new message target. The actual pixel position is always globally accessible. It is possible for applications to ask for mouse-tracking, which means an event is sent every time the mouse moves. Applications can change the image the event handler uses for its sprite, and tooltips are realized as sprite image changes.

There must be at least one running object so that there is always some non-trivial target value in the message target array. This object would be the operating system or the shell. Each application uses the same general model as the GUI recursively, but there is only one global GUI that all applications share.

Summary: there are global resources - the pixel and target arrays and an event-handler process (which, of course, would need to be written). Question: what else do I need to add?
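[Editor's note] A compressed C sketch of points (1)-(5), with a tiny array standing in for a real display; the message_target type, the dispatch function and shell_target are placeholders invented here, only meant to show the pixel-to-target mapping:

    #include <stdint.h>
    #include <stdio.h>

    #define W 4
    #define H 3

    /* (2) The software display: one 32-bit pixel per physical pixel. */
    static uint32_t display[H][W];

    /* (3) A parallel array of message targets, one per pixel.  Here a target is
     * just a callback; in a real system it would identify a widget or object. */
    typedef void (*message_target)(const char *event, int x, int y);
    static message_target targets[H][W];

    /* (4)+(5) The event handler: map an input event at a pixel to its target. */
    static void dispatch(const char *event, int x, int y)
    {
        if (targets[y][x])
            targets[y][x](event, x, y);      /* send the event message */
    }

    /* A stand-in for "the operating system or the shell" owning every pixel. */
    static void shell_target(const char *event, int x, int y)
    {
        printf("shell got %s at (%d,%d)\n", event, x, y);
    }

    int main(void)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                targets[y][x] = shell_target; /* no pixel is ever untargeted */

        display[1][2] = 0xffffffffu;          /* paint one software pixel    */
        dispatch("button-press", 2, 1);       /* an input event at a pixel   */
        return 0;
    }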

MasKalamDug

Before we get involved in anything, we need to see what happens with Windows 8. In case you have missed the fuss: Windows 8 comes with two GUIs. One is the old-fashioned kind (I have forgotten what they call it) and the other, called Metro, is like the touch screen interfaces. The word Metro is convenient and I will keep using it until somebody objects. I was preparing a discussion of touch screen implications when Microsoft made its announcement. Now I propose to wait awhile and see what I can learn from Windows 8 users. I have seen a real-life Windows 8 Metro screen (in a store where I could not play with it) and, like a lot of people, I have reservations about it. But, at least, we might learn what not to do.

opuntia

Having seen the prototype, I am reminded more of an early web page than of a traditional desktop environment.

user333

I came up with this idea on how panels could work: http://opendesktop.org/content/show.php?content=145206 Let me know what you think! I tried to post this before but it didn't show up, so I'm sorry if I double posted.

user333

I'm really sorry!! It seems that whenever I post a comment on my group it puts it here! There must be a bug somewhere.

MasKalamDug

Consider a tablet. It has neither a keyboard nor a mouse - just fingers. I think we need to include tablets in our GUI. This means a change in the idea of focus. It is no longer just where the mouse sprite is pointing. Now focus can be completely missing - the inert tablet - then it appears - a finger - then it disappears. And there are multiple foci - the iPad has two-finger actions. If we get extreme we can visualize the user using both hands and all ten fingers; the old joke now adds their nose. I think the name focus no longer applies, so I am going to speak of TOUCH in place of FOCUS.

There is a rule of thumb in these matters - if you do something for two you might as well do it for an arbitrary number. So we have an arbitrary number of touches that appear and disappear. If we think of a touch as an object, it seems like it is constantly being created and destroyed. However, it seems clear that there is a relatively small number of touches active at any moment, so we can use, say, sixteen touches and enable and disable them. From a usage point of view we must keep in mind that there is no long-term continuity (unless, as I will suggest in a moment, we declare one). We could allocate the touches on the stack like GTK iters, but that seems to add complexity without any benefit.

Now consider the case where a tablet user is required to enter text (I can see no future in which this is not a requirement). The tablet GUI should present a virtual keyboard that the user types into by touching. But that, conversely, tells us how to treat a real keyboard - it replaces a virtual keyboard. I assume people who work with text will want a keyboard, so we need to make a physical keyboard a "plug-in" peripheral that replaces the virtual one.

I am less sanguine about the survival of the mouse. It is easy to visualize a future without mice, but today it would be premature to ignore the mouse. We have to shoehorn it into the GUI somehow. There are two aspects to a mouse - the sprite on the screen and the buttons on the mouse. Ignore scroll wheels for the moment. The buttons are equivalent to touches at the point where the sprite is. But that implies the possibility of multiple different touches at the same point with the same finger (thinking of the sprite as a finger). I guess we can live with that. We should dedicate certain touches - say the first three - to mouse button pseudo-touches.

Backtracking a moment, I ask: are fingers capable of more than one kind of touch? Of course they are - the real question is whether the touch screen hardware is capable of distinguishing between different touches. Some touch screen hardware seems to be and some seems not to be. The technology remains immature and we do not have any insight as to where it will go. I don't like doing it, but it seems necessary to assume each touch comes with an intensity parameter. Obviously there are only going to be a few different values for intensity, perhaps only two. At the level of design I am discussing here, we should avoid making anything fundamental depend on the intensity. At first glance it seems a mouse button has only one intensity, but I can think of several ways an intensity might be added if such a thing were wanted. We might even do such an outré thing as making a double click indicate a harder touch.

The sprite remains. It moves around the GUI screen. From a touch point of view it only appears when I hit a mouse button, and then it, of course, determines where the mouse button touch occurs. But in present-day GUIs it also generates entering and leaving events that control tooltips (and possibly other things). How this functionality is to be managed is something of a design puzzle. One approach might be to consider entering a window to be a very light touch. Since I don't want tooltips coming up instantly, I think it might be better to delay the entering message (or is it an event?). That is, suppose I have a constant tick going to the sprite manager every, say, tenth of a second. When the sprite enters a new window, the manager puts 10 in some buffer and counts down, and if the count ever reaches zero it issues the entering event. Then, perhaps, it puts 100 in the buffer and counts down to a leaving event. Or maybe I leave tooltips showing forever. Obviously either way is easy. This very light touch I mentioned is probably best implemented, when it is a sprite pseudo-touch, as another dedicated touch rather than as the intensity of some other touch. Note the possibility of a finger light touch - that is, assuming the hardware is up to the task, a light touch on a window brings up a tooltip. We should not invest much effort in this possibility because it may not be physically possible.

The scroll wheel is a special minor peripheral. I suppose we must be able to handle it somehow. I imagine it can be set to focus on some window and, as the wheel is spun, it sends occasional "step up" and "step down" messages to the window in question. I see this as another "plug-in" peripheral and not an integral part of the system.

The bottom line is that widgets get event messages containing two pieces of information - touch identity and the parameter I called intensity. Alternating messages from a particular touch toggle it on and off.
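[Editor's note] That bottom line might translate into something like the following C sketch; MAX_TOUCHES, the touch_event fields, and the toggle logic are invented here for illustration, with the first three touch slots reserved for mouse-button pseudo-touches as the comment suggests:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_TOUCHES 16   /* "say, sixteen touches", enabled and disabled in place */

    /* What a widget receives: which touch, and how hard.  Alternating events
     * from the same touch identity toggle it on and off. */
    typedef struct {
        int touch_id;    /* 0..2 reserved as mouse-button pseudo-touches */
        int intensity;   /* a few discrete values, perhaps only two      */
        int x, y;        /* pixel where the touch (or sprite) is         */
    } touch_event;

    static bool touch_active[MAX_TOUCHES];

    static void widget_receive(const touch_event *ev)
    {
        touch_active[ev->touch_id] = !touch_active[ev->touch_id];
        printf("touch %d %s at (%d,%d), intensity %d\n",
               ev->touch_id,
               touch_active[ev->touch_id] ? "down" : "up",
               ev->x, ev->y, ev->intensity);
    }

    int main(void)
    {
        touch_event press   = { 0, 1, 40, 40 };  /* left mouse button as touch 0 */
        touch_event release = { 0, 1, 40, 40 };
        widget_receive(&press);
        widget_receive(&release);
        return 0;
    }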

MasKalamDug

I am struck by how easy it would be to describe a GUI in triples. Triples are one way of looking at a semantic net - and there are many different nomenclatures. For example, in a GUI description I might have the triples:

    Menu Bar | is packed | horizontally
    File | is in | Menu Bar
    Edit | is in | Menu Bar

If I were to assume this means Edit is after File, I would be requiring the triples to appear in a certain order. So it would be better to change the last triple to read:

    Edit | is after | File

Another thing I want to do is attach a widget (all I mean by widget is a software object able to accept a mouse message) to these screen boxes. So I need triples like:

    File | exposes | File Menu Widget

I know Mozilla used some of this RDF technology, but I don't know whether it is used in this part of their setup. It would seem to me to make a dynamic GUI especially easy to use.
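[Editor's note] To show how compact such a description could be in code, here is a small C sketch that stores those triples and answers one query; the triple struct and the query helper are purely illustrative and do not correspond to any existing RDF API:

    #include <stdio.h>
    #include <string.h>

    /* A GUI description as subject | predicate | object triples. */
    typedef struct {
        const char *subject, *predicate, *object;
    } triple;

    static const triple gui_description[] = {
        { "Menu Bar", "is packed", "horizontally"     },
        { "File",     "is in",     "Menu Bar"         },
        { "Edit",     "is in",     "Menu Bar"         },
        { "Edit",     "is after",  "File"             },
        { "File",     "exposes",   "File Menu Widget" },
    };

    /* Find the object of the first triple matching (subject, predicate). */
    static const char *query(const char *subject, const char *predicate)
    {
        size_t n = sizeof gui_description / sizeof gui_description[0];
        for (size_t i = 0; i < n; i++)
            if (!strcmp(gui_description[i].subject, subject) &&
                !strcmp(gui_description[i].predicate, predicate))
                return gui_description[i].object;
        return "(not found)";
    }

    int main(void)
    {
        /* "Which widget does File expose?" */
        printf("File exposes: %s\n", query("File", "exposes"));
        return 0;
    }

Because the GUI layout is just data in a table, adding or reordering elements at run time means adding or changing triples rather than rewriting layout code, which is what makes the dynamic case attractive.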

anonymous-hive

I like it, it's clean and cool... I'm lucky that you integrated a KDE logo (almost black pixels on a black background, a 3x5-pixel font - genius!), so you don't have to remove your background *phew*. cu r3tro
