Processing Input Events

Now that ShadowVolume’s rendering pipeline has been modified to render Flash with Scaleform, we want to interact with the playing Flash content. For example, moving the mouse over a button should cause it to highlight, and typing into a text box should cause new characters to appear.

GFx::Movie::HandleEvent accepts a GFx::Event object that describes the type of event along with additional information such as the key pressed or the mouse coordinates. The application simply constructs an event based on the input it receives and passes the event to the appropriate GFx::Movie.

Mouse Events

ShadowVolume receives Win32 input events in the MsgProc callback. A call to GFxTutorial::ProcessEvent is added there so that Scaleform can process the events. The code below processes WM_MOUSEMOVE, WM_LBUTTONDOWN, and WM_LBUTTONUP:

    void ProcessEvent(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam, 
                      bool *pbNoFurtherProcessing)
    {
        // Mouse coordinates from Windows are relative to the window's client area.
        int mx = LOWORD(lParam), my = HIWORD(lParam);
        if (pUIMovie)
        {
            if (uMsg == WM_MOUSEMOVE)
            {
                MouseEvent mevent(GFx::Event::MouseMove, 0, mx, my);
                pUIMovie->HandleEvent(mevent);
            }
            else if (uMsg == WM_LBUTTONDOWN)
            {
                ::SetCapture(hWnd);
                MouseEvent mevent(GFx::Event::MouseDown, 0, mx, my);
                pUIMovie->HandleEvent(mevent);
            }
            else if (uMsg == WM_LBUTTONUP)
            {
                ::ReleaseCapture();
                MouseEvent mevent(GFx::Event::MouseUp, 0, mx, my);
                pUIMovie->HandleEvent(mevent);
            }
        }
    }

Scaleform expects mouse coordinates to be relative to the upper left corner of the specified viewport, not to the native resolution of the movie. The examples below clarify this:

Example #1: Viewport matches screen dimensions

pMovie->SetViewport(screen_width, screen_height, 0, 0, screen_width, screen_height, 0);

No transformation is necessary in this case: the mouse coordinates from Windows are already relative to the upper left corner of the movie because the movie is positioned at (0, 0). Scaleform scales the coordinates internally from the viewport dimensions to the native movie resolution.

Example #2: Viewport smaller than screen, but viewport is positioned in the upper left corner of the screen

pMovie->SetViewport(screen_width, screen_height, 0, 0, screen_width / 4, screen_height / 4, 0);

Once again, no transformation is necessary in this case. The size and position of the buttons change because the viewport has been scaled down. However, the coordinates used by HandleEvent and the Windows screen coordinates are both still relative to the upper left corner of the window, so no translation is necessary. Scaling of the coordinates from the viewport dimensions to the native movie resolution is handled internally by Scaleform.

Example #3: Viewport smaller than screen and centered

movie_width  = screen_width / 6;
movie_height = screen_height / 6;
pMovie->SetViewport(screen_width, screen_height, screen_width / 2 - movie_width / 2, 
                    screen_height / 2 - movie_height / 2, movie_width, movie_height);

Translation of the Windows screen coordinates is necessary in this case. The movie is no longer positioned at (0, 0), so its new position at (screen_width / 2 - movie_width / 2, screen_height / 2 - movie_height / 2) must be subtracted from the screen coordinates passed in by Windows.
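
As a minimal sketch (assuming the screen_width, screen_height, movie_width, and movie_height variables from the example above, plus the mx/my coordinates and pUIMovie pointer from ProcessEvent), the translation could look like this:

    // Sketch only: make the coordinates relative to the viewport's upper left
    // corner before passing them to HandleEvent.
    int vx = screen_width  / 2 - movie_width  / 2;   // viewport origin, as set above
    int vy = screen_height / 2 - movie_height / 2;
    MouseEvent mevent(GFx::Event::MouseMove, 0, (float)(mx - vx), (float)(my - vy));
    pUIMovie->HandleEvent(mevent);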

Note that if the Flash content is centered or otherwise aligned by GFx::Movie::SetViewAlignment, these transformations do not have to be performed. As long as the mouse coordinates are relative to the coordinates given to GFx::Movie::SetViewport, the alignment and scaling performed by SetViewAlignment and SetViewScaleMode will be handled internally by Scaleform.
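
For reference, a possible setup that lets Scaleform perform the alignment and scaling itself (this sketch assumes the Align_Center and SM_ShowAll enumerants; check the Movie declaration in your SDK version for the exact names):

    // Scaleform centers and scales the movie within the viewport, so mouse
    // coordinates only need to be relative to the viewport itself.
    pMovie->SetViewAlignment(Movie::Align_Center);
    pMovie->SetViewScaleMode(Movie::SM_ShowAll);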

Keyboard Events

Keyboard events are also handled through GFx::Movie::HandleEvent. There are two kinds of key events: GFx::KeyEvent and GFx::CharEvent:

KeyEvent(EventType eventType = None, Key::Code code = Key::None, 
         UByte asciiCode = 0, UInt32 wcharCode = 0,
         UInt8 keyboardIndex = 0)

CharEvent(UInt32 wcharCode, UInt8 keyboardIndex = 0)

A GFx::KeyEvent is similar to a raw scan code; a GFx::CharEvent is similar to a processed character. In Windows, a GFx::CharEvent is generated in response to the WM_CHAR message, while GFx::KeyEvents are generated in response to WM_SYSKEYDOWN, WM_SYSKEYUP, WM_KEYDOWN, and WM_KEYUP messages.
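
For example, a sketch of how these messages could be translated inside a message handler. VKToGFxKey is a hypothetical helper that converts a Windows virtual-key code into a GFx::Key::Code; GFxPlayerTiny.cpp and the Scaleform Player contain such a conversion table.

    if (uMsg == WM_CHAR)
    {
        // WM_CHAR delivers a processed (possibly Unicode) character.
        CharEvent cevent((UInt32)wParam);
        pUIMovie->HandleEvent(cevent);
    }
    else if (uMsg == WM_KEYDOWN || uMsg == WM_SYSKEYDOWN)
    {
        KeyEvent kevent(GFx::Event::KeyDown, VKToGFxKey((unsigned)wParam));
        pUIMovie->HandleEvent(kevent);
    }
    else if (uMsg == WM_KEYUP || uMsg == WM_SYSKEYUP)
    {
        KeyEvent kevent(GFx::Event::KeyUp, VKToGFxKey((unsigned)wParam));
        pUIMovie->HandleEvent(kevent);
    }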

Separate GFx::KeyEvent events are sent for key down and key up. To enable platform independence, the key codes are defined in GFx_Event.h to match the key codes used internally by Flash. The GFxPlayerTiny.cpp example and the Scaleform Player program both contain code to convert Windows virtual-key codes to the corresponding Flash codes. The final code for this section includes the ProcessKeyEvent function that can be reused when integrating with a custom 3D engine:

void ProcessKeyEvent(Movie *pMovie, unsigned uMsg, WPARAM wParam, LPARAM lParam)

Simply call the function from the Windows WndProc function in response to WM_CHAR, WM_SYSKEYDOWN, WM_SYSKEYUP, WM_KEYDOWN, and WM_KEYUP messages. The appropriate Scaleform events will be generated and sent to pMovie.
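
For instance, the call from MsgProc might look like this (a sketch; pUIMovie is the movie pointer used throughout this section):

    if (uMsg == WM_CHAR || uMsg == WM_KEYDOWN || uMsg == WM_KEYUP ||
        uMsg == WM_SYSKEYDOWN || uMsg == WM_SYSKEYUP)
        ProcessKeyEvent(pUIMovie, uMsg, wParam, lParam);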

Sending both GFx::KeyEvent and GFx::CharEvent is important. For example, most text boxes respond only to GFx::CharEvent because they are interested in printable ASCII characters. Also, a text box should be able to accept Unicode characters (e.g., from a Chinese Input Method Editor [IME]). In the case of IME input, raw key codes are not useful and only the final character event (which typically results from several keystrokes processed by the IME) is of interest to the text box. In contrast, a list box control would need to intercept the Page Up and Page Down keys through GFx::KeyEvent because these keys do not correspond to printable characters.

The function is included with the final code for this section in Tutorial\Section4.3. Run the program and move the mouse over buttons. Buttons will highlight correctly, and those that do not require integration with the 3D engine will work properly. Pressing “Settings” will transition to the DX9 configuration screen without any C++ code because d3d9guideAS3.fla implements this simple logic using ActionScript in HUDMgr.as.

To see the keyboard processing code in action, click “Change Mesh” and type into the text input box. There are some minor issues that will be fixed later in the tutorial. Notice the animation that plays when the “Change Mesh” button is pressed to open the text input box. This animation is easy to create in Flash with vector graphics but impractical with a traditional bitmap-based interface, where it would require custom code and extra bitmaps, making it potentially slow to load, costly to render, and, most importantly, tedious to code.

Hit Testing

Run the application and move the mouse while holding down the left mouse button to change the direction the camera is pointing in the 3D world. Now move the mouse over one of the UI elements and do the same. Although the mouse click does generate the desired response in the interface, it still causes the camera to move.

Focus control between the UI and the 3D world is a problem that can be addressed with GFx::Movie::HitTest. This function determines whether the given viewport coordinates hit an element rendered in the Flash content. Modify GFxTutorial::ProcessEvent to call HitTest after processing a mouse event; if the event occurred over a UI element, set *pbNoFurtherProcessing to signal the DXUT framework not to pass the event on to the camera:

    bool processedMouseEvent = false;
    if (uMsg == WM_MOUSEMOVE)
    {
        MouseEvent mevent(GFx::Event::MouseMove, 0, (float)mx, (float)my);
        pUIMovie->HandleEvent(mevent);
        processedMouseEvent = true;
    }
    else if (uMsg == WM_LBUTTONDOWN)
    {
        ::SetCapture(hWnd);
        MouseEvent mevent(GFx::Event::MouseDown, 0, (float)mx, (float)my);
        pUIMovie->HandleEvent(mevent);
        processedMouseEvent = true;
    }
    else if (uMsg == WM_LBUTTONUP)
    {
        ::ReleaseCapture();
        MouseEvent mevent(GFx::Event::MouseUp, 0, (float)mx, (float)my);
        pUIMovie->HandleEvent(mevent);
        processedMouseEvent = true;
    }

    if (processedMouseEvent && pUIMovie->HitTest((float)mx, (float)my, 
                                                   Movie::HitTest_Shapes))
        *pbNoFurtherProcessing = true;

Keyboard Focus

Run the application and click the “Change Mesh” button to open a text input box. Type into the box and keyboard input will work because of the code added in section 4.3.2. However, pressing the W, S, A, D, Q, or E keys both enters text and moves the camera in the 3D world: the keyboard event is processed by Scaleform but is also passed on to the 3D camera.

Resolving this issue requires determining whether the text input box has focus. Section 6.1.2 will describe how Flash ActionScript can be used to send events to C++, enabling our event handler to track focus.

Touch Events

Touch events are similar to mouse events but are sent only while the user is touching the screen. To make use of them, we can route raw touch data from the Windows API directly into Scaleform. The minimum required platform is Windows 7 running on touch-capable hardware; without it you will not be able to follow this part of the tutorial and should skip ahead to the next section.

Before compiling, add WINVER=0x0601 to the project’s Preprocessor Definitions. This exposes the Windows API functionality specific to Windows 7 (i.e., multitouch). In ShadowVolume.cpp, uncomment the macro #define ENABLE_MULTITOUCH, which guards the multitouch code so that it does not compile on unsupported platforms.

#include <windows.h> has been added in order to use the multitouch declarations. During the window’s initialization we register our intent to listen for touch events via RegisterTouchWindow, which enables WM_TOUCH messages to be received in the MsgProc callback for processing.
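
The registration itself is a single call (shown here as a sketch; hWnd is the application window handle, and passing 0 requests the default touch behavior):

    // Ask Windows to send WM_TOUCH messages to this window.
    ::RegisterTouchWindow(hWnd, 0);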

if (uMsg == WM_TOUCH)
    ProcessTouchEvent(pUIMovie, uMsg, wParam, lParam);

ProcessTouchEvent is our function responsible for parsing and relaying the WM_TOUCH message. Windows functions are used to convert the touch coordinates into client-area pixel coordinates. Afterwards, a GFx::TouchEvent is created and sent to the movie via HandleEvent. A GFx::TouchEvent carries a unique ID (managed by the OS), x/y coordinates, the contact area, the pressure (from 0 to 1), and whether or not the point is the primary touch point.

TouchEvent( EventType evtType, unsigned id, float _x, 
            float _y, float wcontact = 0, float hcontact = 0, 
            bool primary = true, float pressure = 1.0f)
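
A minimal sketch of what such a function might look like follows. It assumes the window handle is passed in explicitly (the tutorial’s version, called above without it, may obtain the handle differently) and uses the GFx::Event::TouchBegin, TouchMove, and TouchEnd event types; the tutorial’s final code is the authoritative version.

    void ProcessTouchEvent(Movie *pMovie, HWND hWnd, WPARAM wParam, LPARAM lParam)
    {
        UINT numInputs = LOWORD(wParam);
        if (numInputs > 16)
            numInputs = 16;
        TOUCHINPUT inputs[16];
        HTOUCHINPUT hTouch = (HTOUCHINPUT)lParam;
        if (!GetTouchInputInfo(hTouch, numInputs, inputs, sizeof(TOUCHINPUT)))
            return;

        for (UINT i = 0; i < numInputs; ++i)
        {
            // WM_TOUCH coordinates are in hundredths of a pixel in screen space;
            // convert them to client-area pixel coordinates.
            POINT pt = { TOUCH_COORD_TO_PIXEL(inputs[i].x),
                         TOUCH_COORD_TO_PIXEL(inputs[i].y) };
            ::ScreenToClient(hWnd, &pt);

            GFx::Event::EventType type;
            if (inputs[i].dwFlags & TOUCHEVENTF_DOWN)
                type = GFx::Event::TouchBegin;
            else if (inputs[i].dwFlags & TOUCHEVENTF_UP)
                type = GFx::Event::TouchEnd;
            else
                type = GFx::Event::TouchMove;

            bool primary = (inputs[i].dwFlags & TOUCHEVENTF_PRIMARY) != 0;
            TouchEvent tevent(type, inputs[i].dwID, (float)pt.x, (float)pt.y,
                              0, 0, primary);
            pMovie->HandleEvent(tevent);
        }
        CloseTouchInputHandle(hTouch);
    }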

The SWF movie is now receiving touch events, but by default GFx supports a maximum of zero touch points. The last step is to derive a class from MultitouchInterface that describes our multitouch environment: specifically, the maximum number of supported touch points and whether we will also be relaying GFx::GestureEvents (this tutorial does not cover native gesture events, but the same principles outlined in this section can be used to expose them to Scaleform).

    class FxPlayerMultitouchInterface : public MultitouchInterface
    {
    public:
        // Return maximum number of touch points supported by hardware
        virtual unsigned GetMaxTouchPoints() const { return 2; }

        // Return a bit mask of supported gestures (none at the moment)
        virtual UInt32   GetSupportedGesturesMask() const { return 0; }

        // Is multitouch supported?
        virtual bool     SetMultitouchInputMode(MultitouchInputMode) { return true; }
    };

Once an instance of this interface is passed to the movie via SetMultitouchInterface, touch events are propagated to the movie.
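
The hookup might look like this (a sketch, assuming the interface is reference-counted like other GFx state objects and that pUIMovie is the movie pointer used throughout this tutorial):

    // Register our multitouch description with the movie, e.g. right after it is created.
    pUIMovie->SetMultitouchInterface(Ptr<MultitouchInterface>(*new FxPlayerMultitouchInterface()));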